I am not sure what you are saying, other than that you and your students use an excellent approach to analysis that will do a great job of identifying issues and their resolution.
I expected nothing less from you, and you did not disappoint.
Dizzy Gillespie: "Some days you get up and you put the horn to your chops and it sounds pretty good and you win. Some days you try and nothing works and the horn wins. This goes on and on and then you die, and the horn wins."
Who is to say whether, on those days when the horn won, the mouthpiece was not clocked advantageously, or the gap was not providential, or the mouthpiece was a little loose in the receiver, or something else we now accept as really helping was not applied?
All we have is our sound, and if something helps, whether that be a known adjustment or a psychological crutch, I don't care; it works out the same to the audience.
More than likely this is an effective adjustment.
If it helps use it.
All that matters is reliably discovering what really does help.
Great post; I agree as far as you go, but my caveat is that the aerosols and sludge you speak of take time to harden and set onto the surfaces inside the instrument. If you disturb them before they have set solid, they are gone.
This is the secret to a clean instrument free of accretions: they are flushed away with daily cleaning before they become a problem.
The gunk only sets like concrete, and only needs a tech to shift it, if it is allowed time to harden.
As they say, a rolling stone gathers no moss.
An interesting post.
First of all, I respect the heck out of Flugelgirl, Trumpetplus, ROWUK, and the other exceptional techs, teachers, and players.
What I have found over the years is that other players grossly under-lubricate, grossly under-maintain, and grossly under-clean their instruments.
Keep the instrument squeaky clean, lubricate liberally, and set up the instrument properly.
This takes almost no effort to do.
It seems that 99.99% of players simply play their instrument, chuck a bit of oil in now and again when the valves are dry, and call that good maintenance.
Maybe I am in a strange place, but every player I have spoken with here has declared that they are having huge issues with their valves. That should not be.
I see a major issue here, and I approve of your highlighting the lack of care that you have to deal with on a daily basis, Flugelgirl.
I am not trying to say I am great here, but there must be a reason that every single instrument I have ever owned and played, and we are talking dozens, of all ages, came to me in an unusable state and now has perfect valves that never bind or hang.
Yes, players who cannot look after their valves will have massive trouble with them, with red rot, and with a host of other issues, but the answer surely is not to tell them simply to hand the instrument to a tech; the answer surely is to educate them.
I want to say I support my techs and use them often, but it's like a car: if you cannot keep the oil topped up, the brake fluid levels maintained, and the ashtrays clean, some driver education is called for.
Louis Armstrong bunged his instrument under the tap after every playing day, and that's what I do; it gets rid of 99% of the crud in the instrument.
Then swabbing the valves and reoiling them gets rid of 99% of the remainder of the crud.
That takes a total of 5 minutes to do.
Oiling the valves every hour or so during playing keeps them running freely and sweet.
Job done. Easy.
A very sensible and correct comment; the valve assembly is indeed only half the valve system, and the valve casing can also collect debris and accreted foreign matter. I used to think the same way as you.
I have, however, extensive experience gained over several years of chemically deep-cleaning the valve body alone, and I have never noticed any problem with the casing needing anything beyond simple brush cleaning or swabbing.
Cleaning the valve alone is enough, however illogical that appears to be.
I originally assumed that not deep cleaning the casing as well would eventually cause issues. Those issues have simply never materialised.
I would say that it is very rare that deep cleaning to the valves is needed if you already use a good effective maintenance regime.
I would suggest the following as a guide:
simple instrument cleaning: daily
valve swabbing: monthly
valve deep clean: every 5 years (only when really needed)
hand to a tech for a chem clean: every 10 years
I never have any valve problems; I can pick up any of my instruments and the valves are always perfect and work immediately without any oiling.
I should add that one of my instruments is a Yamaha YCR2330 MkII in perfect condition, and all of my instruments have valves that perform exactly like the Yamaha valves, no matter the age of the instrument.
If valves behave as Yamaha valves do, there isn't much of a problem with them.
Your experience may be different, but I do around 30 to 50 hours of playing per week on three instruments, so I should be seeing a lot of valve problems if my cleaning were not effective, and I see no problems at all.
I experience sweet, fast, buttery valves: no hangs, no slowdowns, no clicks, no jams, on all valves up to 100 years old.
I will add that it takes me around a month of work to properly clean and prepare the valves and fix all the problems after first receiving an instrument, during which time it is constant pain and misery.
After that I never see a problem.
@Trumpetsplus agreed
Ultrasonic cleaning uses cavitation to loosen dirt; cavitation swiftly erodes solid materials and eats away screw propellers in a marine environment.
Cavitation treatments should only be used for minutes, not hours, but they are safe when used for minutes.
Effective cleaning must happen, and chemical cleaning can also dissolve solid materials. Acids and harsh solvents can erode the materials they are cleaning; in particular, brasses can be dissolved chemically. This is what causes dezincification of brass, or red rot.
In my somewhat limited experience, brass instruments are typically full of crud built up over many years or decades of ineffective cleaning. I have never found a single instrument that was not crud-bound.
All crud-bound instruments are rendered unusable because of it, so I would ask: what is to be done with an unusable, crud-bound instrument?
The choices appear to boil down to these four,
1 allow it to remain unusable,
2 make it unusable by damaging it through over-cleaning,
3 use careful cleaning,
4 give it to a tech to properly care for it.
Option 3 if you are up to the job yourself, or option 4 if you are not.
Bruce is up to the job, and I believe I am up to the job; the rest is up to the individual, like car maintenance.
I think ultrasonic cleaning is a useful tool; to say avoid it is like saying avoid car maintenance at home.
I for one like the raw brass look on your instrument, Bruce, and your trumpet tricks series is a great source of information for us; thanks for offering your experience and discoveries.
I totally agree with you: deep cleaning valves transforms them. I have been arguing this for years, and seemingly nobody has believed me.
Formerly unreliable, slow, and painful valves that hang up frequently are transformed, simply by cleaning them, into slick, buttery-smooth devices that operate faultlessly, exactly as you describe.
This has worked without exception on valves of many different designs, in all materials from brass to monel to nickel to stainless steel, and of all ages up to 100 years old. Every single valve I have ever maintained that came to me hesitant, slow, sticky, and unreliable has operated perfectly and faultlessly after a deep clean.
Your experience here is fully supported by my experience of over a dozen instruments from 1924 onwards.
Of course I cannot speak for every single valve ever made, but I will say this: whenever I hand any of my instruments to brass players to try, they are genuinely astounded by the valves.
This is my real-life experience: all the valves I have ever used are exceptional.
I did not mean to hijack your thread, Bruce, but I wanted to confirm that in my experience you are 100% correct: properly and fully cleaning valves completely transforms an instrument.
I will be adding an ultrasonic cleaner to my Christmas list.
Thank you sir.
On the question of whether the cost of refurbishing a horn exceeds the value of the horn after the refurbishment is completed, I have known many cases where it does not, and a profit can therefore be made from refurbishment.
Even after adding the purchase cost to the refurbishment cost, the value of the horn can still exceed the total.
Real-world example of a horn in my possession (prices in UK pounds): purchase price £200, refurbishment £550, typical sale price on the second-hand market in excess of £900.
So it can be done, sometimes.
I have also seen Martin Committees on Buy It Now for £1700; refurbishment costs around £550, and in excellent condition they are commonly offered for between £3000 and £6000.
It is difficult but not impossible, and it helps if you are lucky.
just my 2 cents
It is my belief that trying to create a close copy of an instrument out of other instruments' parts is unlikely to yield success.
Instrument designers had a goal in their designs and they used subtle design elements to reach that goal.
Assembling parts in the hopes of replicating that goal abandons their design philosophy.
I feel the way forward is to research instruments by other makers that also reached that goal and therefore had fewer compromises.
For example Schilke, Reynolds, Martin, Rudy Muck, Bach, Besson, Selmer, Olds, and others. All of these created wonderful instruments, and there may be a close alternative to the 28B somewhere amongst them.
Renold Schilke, for example, designed solutions to intonation problems in his horns through careful placement of braces, constrictions, the shaping of the bell flare, and the tapers.
Such sophisticated design elements cannot be replicated simply by assembling foreign parts from instruments with different design elements in them; something would have to give, and that might be the tone, the core, the blow, or the intonation.
Just my 2 cents
No, I haven't had cataract surgery, and I am not qualified to comment on how long you should wait before returning to playing after this procedure.
I posted my comments to suggest that you allow yourself the best chance of a swift and issue free recovery.
The doc suggested 10 days; enough said, we are talking health here. Trumpet playing can be stressful on the body, as we all know.
I would add the following anecdotal comment: some years ago a severe motorbike accident left me with spinal injuries, several fractured vertebrae, a punctured lung, and several major broken bones. The neurosurgeon in charge of my case said that a significant number of patients turn out to be very slow to recover, while others are swift.
We cannot know in advance which group we are in.
I would therefore, for safety's sake, assume myself to be in the very-slow-to-recover group, and allow longer than the doctor recommends unless I have definite evidence to the contrary.
In my opinion it just is not worth gambling my prospects of recovery for the sake of saving a day or two.
Just my 2 cents.
Good luck.
Not all trumpet playing is the same.
I would be inclined to begin with gentle playing in the lower register at first so internal pressures do not build.
Then gradually lift the playing to the upper register.
Driving out high notes has been suggested as a contributing factor to eye problems in some players when taken to excess, so some care in this area may be wise.
My mistake
These are chat-room rules, and I don't do chat rooms.
I don't need a chat room.
I like to learn new things, and that often means lengthy posts.
I was putting lengthy posts in a chat room.
I wish you well with your chatting, however.
I pretty much expected that response.
The response of a spoiled child who has read up on a subject, is incapable of an adult conversation on the topic, and just wants to "prove" how much he knows.
If you don't like what is written you always have the option not to read it, but in your case you choose instead to try to force the writer to obey you, and you ridicule them if they don't.
I am happy you have chosen silence for the future, but if you do answer any of my posts in an adult manner I will be more than happy to have a dialogue with you.
Until then, farewell and good luck.
You are turning a discussion into an argument again.
Whether you like it or not, the fact is the 7 layers exist, from layer 7, the application layer that we interact with, down to layer 1, the physical layer, the lowest hardware level, implemented for interconnectivity.
The internet is not the only game in town; there is a variety of connectivity that is not internet-based: radio-based systems, defence systems, high-security applications, and small bespoke implementations that avoid the internet.
NASA, for example, does not use the internet to communicate with devices in orbit. The Navy does not use the internet to communicate with ships at sea.
And in these less-observed areas, manufacturers implement the 7 layers for safety, data integrity, and security.
As for updating my knowledge, I don't need to; I have worked at all levels in support of PCs, peripherals, servers, server farms, and clusters, for individuals, companies, corporations, defence organisations, banks, hospitals, government, and the armed forces.
The majority of my work was internet-based, and companies often cut corners there, but the internet was only one small part of the entire mix of connectivity I was responsible for supporting.
I do not wish to turn this into a fight, I was not correcting you I was praising your knowledge and contributing.
You seem very defensive however.
I was not questioning your knowledge, so why are you questioning mine, particularly when you make assertions that conflict with my hard-won industry experience working at all levels? And I have a significant amount of experience, from the very highest level to the lowest.
I was simply pointing out that the OSI layers exist in the same way that laws exist. Not all manufacturers comply with OSI, and not all people comply with the law; but people should comply with the law, and companies should comply with OSI.
And where they do comply with OSI and with good practice, they apply error checking when transporting data between layers. This cannot be denied; if you were to deny it I would be forced to question your knowledge and experience.
It could be that you are very knowledgeable academically, but educationalists do not know everything, and in the real world academic knowledge sometimes falls short.
Error checking between OSI layers exists; it has not disappeared simply because it is not always applied by manufacturers who choose not to implement it.
Spanning of course exists too: where a device spans layers, error checking between those layers becomes unnecessary.
And simply because many manufacturers choose not to implement it does not mean we should not. We should not abandon laws just because lots of people ignore them.
Now for a real-world example of an implementation of the OSI 7 layers, with error checking, that destroyed a business's ability to operate.
I was tasked some years ago with resolving a catastrophic failure at a company, one that prevented any of the company's home workers from connecting and functioning.
The problem turned out to be caused by the error checking in the device at layer 1.
It was finding errors in the data every few seconds and forcing a disconnect and resend. Nobody could work and the company's survival was threatened. This was a tier 1 catastrophic failure, and every engineer assigned to it globally had failed to resolve it despite escalations and extensive work.
Nobody had the guts to turn off the error checking, because doing so was bad practice: error checking at the OSI layer boundary was mandatory for this equipment.
I had to insist that we break the rules and disable the error checking. I got my way because the fault could not be resolved in any other way, and with error checking turned off the equipment performed faultlessly.
I was told YOU CAN'T TURN ERROR CHECKING OFF; everyone was trained never to break this fundamental rule, it was a law. This is the difference between academic knowledge and real-world knowledge. The world is not black and white; sometimes we have to break rules that teachers say must not be broken in order to get results.
I do believe that you are very knowledgeable but the way you have approached your posts suggests a very detailed academic knowledge that does not always work in the real world.
You are not the only very knowledgeable person in the world. And when I add information to a thread that you have commented on that does not mean that I am questioning or doubting your knowledge or abilities.
As for your assertion that brevity is the key to understanding, I disagree; brevity usually means leaving something out.
Technical subjects demand full and complete descriptions and answers, or the entire story is not told.
And with brevity comes a lack of information, and that lack sometimes causes wrong decisions, because we don't have all the facts.
In chat rooms nobody likes walls of text, but in technical descriptions walls of text are required, or important information goes missing.
The only way to escape this issue would be to refuse to speak technically in chat rooms, and that means we just chat pointlessly; I don't want that.
I refuse to leave out pertinent information simply because the reader cannot be bothered to read a full and complete text.
If they cannot be bothered to read a technical description in its entirety, the fault lies with the reader and not the author.
Less is not more here; less is less.
I agree with J.Jericho; your post is exceptionally well presented, accurate, and clear.
I would add, however, that while you are quite correct when you say that memory in consumer hardware doesn't have error correction codes (ECC) while more expensive server hardware generally does, we do still end up with error detection and correction, thanks to the OSI 7 layers and the way they are implemented.
Typically, hardware and software manufacturers include error checking at the OSI boundaries their equipment communicates across.
The end result is error checking of the consumer device's traffic by the back door.
This can, and sometimes does, lead to excessive and repeated error checking.
For example, when sending information from the application layer on source machine 1 across a comms link to the application layer on destination machine 2, the data traverses the stack twice, seven layers down and seven layers up, so if we error-check at every layer transition we check the same data 14 times.
While in the classroom and the lab error checking is mandated and always held to be a good thing, in the real world this excessive error checking has been known to kill the data transfer and cause catastrophic failures.
I have personal experience of this.
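The repeated checking described above can be sketched in a few lines. This is a toy model, not any real protocol stack: I am assuming a CRC32 appended at every layer crossing (real stacks use different checks, or none, at different layers), but it shows how one payload ends up verified 7 times on the way down and 7 times on the way up, 14 checks in total.

```python
import zlib

def send_through_layers(payload: bytes, layers: int = 7) -> bytes:
    """Wrap the payload once per layer, appending a CRC32 at each
    boundary (a toy stand-in for per-layer error checking)."""
    data = payload
    for _ in range(layers):
        data += zlib.crc32(data).to_bytes(4, "big")  # each layer adds its own check
    return data

def receive_through_layers(data: bytes, layers: int = 7) -> bytes:
    """Unwrap and verify the CRC at every boundary on the receive side;
    raise as soon as any layer detects corruption."""
    for layer in range(1, layers + 1):
        payload, crc = data[:-4], data[-4:]
        if zlib.crc32(payload).to_bytes(4, "big") != crc:
            raise ValueError(f"corruption detected while unwrapping layer {layer}")
        data = payload
    return data

# 7 checks computed on send + 7 verified on receive: 14 checks of the same data
frame = send_through_layers(b"hello")
assert receive_through_layers(frame) == b"hello"
```

Flipping a single bit of the frame makes the very first unwrap fail, which is exactly the disconnect-and-resend behaviour described in the layer 1 story: one detected error aborts the whole transfer.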
Excellent post; you are absolutely right.
Put simply, a CMOS logic gate is in essence an analogue circuit forced to behave digitally by a voltage threshold.
The problem is there is no such thing as a circuit that natively outputs logic 1 or logic 0.
There are instead analogue circuits that output a range of voltages, which we force to behave like logic circuits.
If the output of the gate is above the threshold it is read as logic 1; below it, as logic 0. In 5 volt CMOS, 3.5 volts is commonly taken as the minimum voltage for a valid logic 1.
CMOS gate outputs are supposed to sit at either 0 volts or 5 volts for the two logic states, but they rarely do, because they are analogue circuits trying to behave like digital circuits, so we use threshold values to detect the logic states.
In reality the output could be any value between those voltages, and given losses in the circuit, a logic 1 above 3.5 volts could be pulled below 3.5 volts by a voltage sink of some kind and read as a logic 0.
The logic has then changed when it should not have, and the computer program malfunctions.
The logic state should really be treated as unknown when the gate output is just under the 3.5 volt threshold set for logic 1.
When does a 0 become a 1? At what voltage: 3.50? 3.49? 3.48?
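That threshold question can be made concrete with a tiny classifier. The 0.7·VDD and 0.3·VDD figures below are illustrative nominal values for 5 V CMOS, not datasheet numbers; the point is the band in the middle where the output is neither a valid 1 nor a valid 0.

```python
def logic_level(v_out: float, v_dd: float = 5.0) -> str:
    """Classify an analogue gate output voltage against illustrative 5 V
    CMOS thresholds: at or above 0.7*VDD (3.5 V) reads as logic 1, at or
    below 0.3*VDD (1.5 V) as logic 0, and anything between the two is
    electrically indeterminate."""
    if v_out >= 0.7 * v_dd:
        return "1"
    if v_out <= 0.3 * v_dd:
        return "0"
    return "indeterminate"

print(logic_level(4.9))   # a healthy logic 1
print(logic_level(0.2))   # a healthy logic 0
print(logic_level(3.45))  # just under the 3.5 V threshold: no longer a valid 1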
What happens if a gate output sits at the 3.5 volt threshold, rippling slightly between 3.4 volts and 3.6 volts?
This is the computer-logic equivalent of panicking.
Detection of 3.5 volts or more causes downstream logic gates to flip to logic 1, while less than 3.5 volts causes the downstream gate to flop to logic 0.
There is a period of indecision due to logic-gate ripple, during which the gates are in an indeterminate state; they take time to settle into the final, correct logic.
The larger the number of gates, the longer the ripple takes to pass through and settle.
This settling period, in which the logic gates are behaving illogically, grows as circuits are miniaturised, and with greater miniaturisation comes greater difficulty in determining logic 1 in any single gate.
The hope, of course, is that all logic gates will settle to the correct logic value; however, as you quite rightly point out, losses due to miniaturisation plus abnormalities can disrupt gate performance.
We cannot wait forever for logic circuits to settle, so we assume that within a set period of time all rippling will have ceased and the logic will have reached its correct values.
A typical 4-gate cluster might take around 300 ps to settle. That is not much time, 300 trillionths of a second, but compounded across the sheer number of logic gates in a system it can take a significant amount of time for all rippling to cease in all the gates.
This is one of the limiting factors on the growth of computer systems.
After rippling has ceased we read the outputs of the gates, allowing time for all rippling to end plus x, a safety-margin time. We cannot risk reading a gate output while it is still rippling.
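The settle-then-read rule above amounts to simple arithmetic: delay per gate times the depth of the gate chain, plus the safety margin x. All the figures in this sketch are illustrative (75 ps per gate reproduces the "4 gates in about 300 ps" example; no real datasheet is being quoted).

```python
def settle_time_ps(gate_depth: int, per_gate_ps: float = 75.0,
                   safety_margin_ps: float = 50.0) -> float:
    """Worst-case time before a chain of gates can be read safely:
    each gate in the chain adds its propagation delay, and a safety
    margin is added on top. Figures are illustrative only."""
    return gate_depth * per_gate_ps + safety_margin_ps

# at 75 ps per gate, a 4-gate cluster settles in about 300 ps, plus margin
print(settle_time_ps(4))    # 350.0
print(settle_time_ps(400))  # a 100x deeper path: ripple time grows linearly
```

The linear growth with depth is the point: deepening the logic by a factor of 100 stretches the wait by roughly the same factor, which is why total settle time becomes a limiting factor as gate counts grow.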
But what happens when a stray particle strikes a logic gate and trips it from logic 1 to logic 0 during the safety-margin wait state?
Rippling must begin again. A particle strike could also flip a binary value in a stored program on the hard drive, changing the program forever and preventing the computer from obeying the designed code, because the code is no longer as programmed.
We are bombarded by cosmic-ray particles every day and most pass through us harmlessly, but when a neutron interacts with the semiconductor material it deposits charge, which can change the binary state of a bit. (The culprits in practice are cosmic-ray neutrons and alpha particles from chip packaging; neutrinos, despite often getting the blame, almost never interact with matter at all.) Such particles are hard to shield against, so computers are vulnerable to them in stored programs, in RAM, in ROM, and in the data carried on the bus.
We may then end up reading the logic output during a new rippling state caused by a particle strike, and we then have a 50-50 gamble that the logic is in error.
Exactly as you describe and pointed out, J.Jericho.
And exactly as you suggest, greater miniaturisation makes this effect more likely: the gates shrink, but the particles do not, and their effects could become more profound for logic circuits as we miniaturise further over time.
The more logic gates you have, the more opportunity there is for particle disruption.
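A single-event upset of the kind described, a particle strike depositing charge and changing the binary state of one bit, is easy to model as an XOR with a one-bit mask. This is a toy model of the effect on a stored word, nothing more.

```python
import random

def soft_error(word: int, width: int = 32,
               rng: "random.Random | None" = None) -> int:
    """Model a single-event upset: a particle strike flips exactly one
    randomly chosen bit of a stored word (toy model)."""
    rng = rng or random.Random()
    return word ^ (1 << rng.randrange(width))  # XOR with a one-bit mask

stored = 0b1010_1010
hit = soft_error(stored)
flipped = stored ^ hit  # exactly one bit set: the struck position
print(f"before {stored:#010b}  after {hit:#010b}  struck bit mask {flipped:#010b}")
```

Whichever bit the strike lands on, the stored word is no longer the word that was written, which is all it takes for a program image in RAM to stop matching the program that was compiled.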
This is a very long post, I know, but we are deep diving here into complex areas that are usually hidden from the general public and largely unknown to them.
The present state of miniaturisation of logic is called VLSI, very large scale integration, and it brings difficult issues relating to the robustness and reliability of these very large scale integrated circuits.
I am comfortable, J.Jericho, that you already know all of this and more besides, possibly more than I do on this topic, given your excellent posts.
In examining this topic we are very close to the cutting edge of computing and its far-reaching implications for the future of computer systems, as we plod ever onwards down the road of greater miniaturisation; and with greater miniaturisation comes greater risk of failures caused by the very miniaturisation that we seek.
I intended this post to illustrate exactly why your post is technically correct and on topic. I do not intend to slight other members; very few people understand this stuff, and we can only know what we know.
Most members know lots of stuff I do not know and cannot hope to know, and I want to make it quite clear that not knowing this deeply technical and difficult subject is no reflection on them at all.
Moore's law says computer chips double in complexity every 2 years. I would suggest that risk doubles along with them.
I am not at all surprised that most people do not know all this; it is the province of microelectronics engineers, largely hidden from the public, and there is no need for the public to know these things. Only electronics engineers, chip designers, and systems designers need to know this stuff.
What does surprise me on a daily basis is how reliable computers are given their huge vulnerability to errors and mishap.
Computers seem to me to fly less like an F-15 and more like a bumblebee; they just manage to get there despite being a bit poor at flying.
I don't disagree with you in principle, but the issue is that more than half the computers in the world are carrying errors; many are carrying several, some of which may take years to appear.
The argument we are discussing is that the computer will always obey the program the human has input, because that's what computers do: they execute lines of code as programmed.
Except they don't; that is a wrong statement. They simply appear to execute the code correctly most of the time.
Never in the history of Intel computing have computers executed exactly the code the programmer input and compiled.
The computer makes a copy of the code in RAM and executes that copy. It is fairly common for there to be a corruption in that copy, so you have a failure right there.
The truth is, even if the copy is a good one, the computer does not execute the program correctly every time, due to other errors that exist elsewhere.
And the computer cannot spot most errors; it has to rely on checksums, which do not reveal much, and errors can cancel out in checksums.
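The "errors can cancel out in checksums" point is worth a concrete illustration. With a naive additive checksum (deliberately weaker than the CRCs real systems use), two compensating corruptions slip through undetected:

```python
def additive_checksum(data: bytes) -> int:
    """A naive checksum: the byte sum modulo 256. Far weaker than a CRC,
    used here only to show how independent errors can cancel out."""
    return sum(data) % 256

original  = bytes([10, 20, 30, 40])
# two compensating corruptions: one byte rises by 5, another falls by 5
corrupted = bytes([10, 25, 25, 40])

assert corrupted != original
assert additive_checksum(corrupted) == additive_checksum(original)
print("checksum", additive_checksum(corrupted), "matches despite corruption")
```

Both byte strings sum to 100, so the check passes and the corruption goes unnoticed; this is why a matching checksum is evidence of integrity, not proof of it.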
There are several places where corruption of the program is likely to occur.
I can name several kinds of errors and in every case the program the human inputted is not followed by the computer.
FYI, I worked as a second-line and third-line technical support engineer with Hewlett Packard, working on Intel products, SUN systems, comms, and telecoms.
I designed and built several servers, designed and built a clustered supercomputer on Linux, programmed distribution software for Hewlett Packard, acted as an advisor to technical support companies, and worked with radio networking companies.
I can tell you now: computers do not always execute programs correctly, due to fundamental issues in the way they are constructed, configured, and operated.
Check out real-time computing and the Ada language.
Ada is a strongly typed programming language that enjoys widespread use within the embedded systems and safety-critical software industry.
Ada solves some of the issues that plague Intel systems.
General-purpose computers were considered too unreliable for the Moon shot, and NASA had to commission a real-time system designed to guarantee that its computer would execute code correctly; NASA was one of the pioneers of real-time computing.
It would not have had to do that if computers executed code reliably and correctly, but they didn't, and they still don't.
The systems that do reliably execute code correctly are real-time systems; these are mission-critical systems, like air traffic control, where lives are at risk or lost if code is not executed correctly.
I do know what I am talking about here.
Why do you think the most common fix for a computer is, and always has been, "turn it off and on again"? It is because they often don't function correctly and don't execute code correctly, and turning them off and on again refreshes the operating system, the RAM images, and so on, for a few hours until the next error hits.
So does a computer execute code exactly as it was told to in the code?
No, no way. Sometimes they do, sometimes they don't.
I would draw your attention to computers acting outside their programming.
This has been happening for at least 30 years and continues to happen.
Typically, published software is tested and then released to the public for sale.
The public uses the software, and then some users discover undocumented features not mentioned in the sales documentation, advertising, or operating manuals.
Some of these undocumented features let users enjoy benefits they did not expect; others prevent correct operation and are called bugs.
Both types are later eradicated from the software by the manufacturers in subsequent releases or updates.
For this reason, since it is both illogical and wrong to claim that all activity of a software/hardware system is always controlled by the programming of the system, we must accept that computer systems do not follow their programming in every case.
To say that all program behaviour is directed by humans must be wrong, or undocumented functionality that gives unexpected results could not exist.
The existence of software/hardware systems that do not obey human programming proves beyond a shadow of a doubt that computers do not always obey human commands.
The cause of this is the complexity of coding. If a program amounts to fewer than 10,000 lines of code, it is fairly simple to error-check the entire program, with all possible variables and for all possible eventualities.
It must be said that a program that takes 2 years to code typically takes about as long again to error-check fully and reliably.
The largest programs today contain upwards of a billion lines of code, and these immense programs cannot possibly be thoroughly tested to eradicate coding errors before release, or they would never come to market.
Microsoft long ago stopped exhaustively testing its code and now relies heavily on customer reports to reveal errors in its software.
To suggest that computers always obey human-created code is just plain wrong.
In a perfect world, where code never contained errors, that would be possible. But we do not live in a perfect world; programmers are fallible, they make errors, and computers malfunction because of those errors.
More than this, an entire industry has grown up to exploit errors in code, allowing hackers to gain access to systems where the programmers had supposedly coded traps to prevent hacking.
This is a symptom of the huge number of error-filled software applications around today.
I would suggest that more applications disobey their programming due to errors than obey it.
Or are we going to say, for example, that a missile silo that fires its missiles due to an error in a computer program is performing faultlessly, because it followed its programming, when that programming was never designed to fire missiles in that error state?
There must be a better measure of computer malfunction than this.
I cite
First American Financial Corp Data Leak
Quora Data Breach
Cambridge Analytica Scandal
Marriott International
The University of California, Los Angeles (UCLA) Data Breach
These are just the first 5 out of dozens of the most serious computer malfunctions.
These are all software systems that did not perform as programmed: they either allowed hackers to steal or corrupt data, or simply failed and destroyed data.
If these computer systems had simply been obeying their programming, then these data breaches would not have happened.
So let's not kid ourselves: computers do not always obey their programming.
The thing to remember is that nobody programs insects, animals or human beings. They all consist of a neural net of greater or lesser sophistication that programs itself by what we call learning.
The main reason that computer logic gates cannot program themselves is that they are not yet neural nets of sufficient complexity to do so.
Each of the roughly 86 billion neurons in a human brain is connected to about 7,000 other neurons. This puts the synapse count in a human brain in excess of 600 trillion.
If we assume a logic gate to be equivalent to a synapse (which is actually difficult to argue, but cut me some slack here), then we need 600 trillion logic gates in a computer to rival a human brain.
In computers, we speak of around 100 million logic gates being available today.
A trillion is a million million.
If a computer today has 100 million logic gates and a human brain has 600 trillion neural connections, then the human brain is around 6 million times more powerful than a modern computer.
Once computers become 6 million times more powerful than they are today then true AI should become commonplace.
In the meantime, they are simply programmable adding machines, and the administrator is completely correct.
According to Moore's Law, computers double in power every 2 years, so we can use this to estimate when to expect computers with 600 trillion logic gates.
I have done the math: according to Moore, it will take about 44 years for computers to hold the 600 trillion logic gates needed to challenge the human brain for computational power.
I don't think we are up against it quite yet, but just wait: the year 2067 is not that far away.
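The arithmetic above can be checked in a few lines of Python. All of the figures (a commonly cited count of about 86 billion neurons, 7,000 connections per neuron, 100 million gates per computer, one doubling every 2 years) are rough assumptions, not measured values:

```python
import math

# Rough figures from the argument above -- assumptions, not measurements.
neurons = 86e9                  # commonly cited neuron count of a human brain
connections_per_neuron = 7000
synapses = neurons * connections_per_neuron   # ~6e14, i.e. ~600 trillion

gates_today = 100e6             # estimated logic gates in a modern computer

ratio = synapses / gates_today  # ~6 million: brain vs. computer
doublings = math.log2(ratio)    # capacity doublings needed to close the gap
years = doublings * 2           # Moore's Law: one doubling every 2 years

print(f"ratio: {ratio:,.0f}x")
print(f"doublings needed: {doublings:.1f}")
print(f"years at Moore's pace: {years:.0f}")
```

This works out at about 22.5 doublings, i.e. roughly 45 years, matching the ballpark figure quoted above.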
I have never had a problem finding plenty of places to play.
First of all, equipment: the smallest under-seat cabin bag for most international carriers can accommodate a standard trumpet.
If that doesn't work, cornets fit in the smallest of bags.
I have travelled internationally on passenger jets with a trumpet stowed in a cabin bag under the seat in front of me, with no difficulties at all.
Dump the trumpet case, take a soft cabin bag.
This means you will never have to compromise on your equipment while travelling.
When you get to your destination, there are always parks, woods, streets and back alleys; I have never had any difficulty finding a dozen places to play within minutes of where I am accommodated.
The only place there has ever been any issue or restriction was a mall, and their heads were so far up their rear ends that they outlawed their customers from singing or humming to themselves in the mall.
The problem you will most likely face is turning down the people who want you to play for them.
The main problem I expect you to suffer is there being too many places to play and practise, rather than too few.
What I have seen in the past in forums is trumpet players who insist on buying a pocket trumpet when there is no need, then insist upon playing it in their hotel room, suffering intonation issues from unfamiliar gear and complaints from the hotel's guests.
Take a trumpet with you, walk to a park, alley or street, and you will be astounded by the goodwill and respect that the citizens there, who are starved of live music, will give you for your efforts.
What better high is there for a musician than spontaneous applause and cheers for what amounts to doing a bit of practice to preserve your chops?
One thing I would suggest is learning a few simple tunes that you can play easily and that local audiences might like, but I suspect you don't need that, being an experienced player.
People want to hear you, so let them and have a ball doing it.