TrumpetBoards.com
    • Profile
    • Following 0
    • Followers 1
    • Topics 11
    • Posts 150
    • Best 50
    • Controversial 0
    • Groups 0

    Posts made by Jolter

    • RE: Looking for Besson Meha piston (Kanstul)

@jolter said in Looking for Besson Meha piston (Kanstul):

      (Thanks for the tip, but right now, I don't have the money to spend on a valve job, much less on a new Meha...)

      Today was the day I finally posted this fine Meha off to a workshop for valve replating.

      I've sort of gotten into a repair hobby myself, lately, but this particular job will be left to a professional. The horn is too nice for me to ruin it with a botched valve job!

      Will post with pictures once I get it back.

      posted in Repairs & Modifications
      J
      Jolter
    • RE: Moderator absent...

      @barliman2001 said in Moderator absent...:

      The whole thing is organized by a cello player under the brand name of Symphonic Holidays, www.dacapo-travel.eu.

      What a lovely idea for a business!

      I hope you'll have fun, and I hope to go myself one day.

      posted in Announcements
    • RE: Bots are getting scary

      @trumpetb
      Hey, you just gave me an idea. How about I take your advice and never read any of your posts in the future? Then you won’t have to read my replies and I won’t have to parse huge walls of extraneous text.
      That should make us both enjoy this forum a lot more.

      Please don’t bother to respond to this message as you would only be shouting into the void.

      posted in Pedagogy
    • RE: Bots are getting scary

      @trumpetb said in Bots are getting scary:

      @Jolter

      I agree with J.Jericho your post is exceptionally well presented accurate and clear.

      I would add however that while you are quite correct when you say that Memory in consumer hardware doesn't have error correction codes (ECC) but more expensive server hardware generally does, - we do however end up with error detection and correction due to the OSI 7 layers and the way it is implemented.

      Typically hardware and software manufacturers include error checking at the OSI boundary their equipment communicates across.

      The end result is error checking of the function of the consumer device by the back door.

      You’re overcomplicating things, as usual. There are always more details that we could add to any answer, but sometimes brevity is the key to getting the audience’s understanding.

      Of course, the error correction of the various network layers was what I was getting at when I wrote about network retransmission. I just glossed over some details.

      One more thing: the OSI model is not implemented by any of the major OSs on the market today. Windows, Linux, and macOS each implement only the bottom four layers, in the form of the TCP/IP stack. I don’t know if the OSI model was ever fully realized, even in the old Unixen. So better update your knowledge: it’s only been four layers down and four up since the Internet came about.

      posted in Pedagogy
    • RE: Bots are getting scary

      @j-jericho said in Bots are getting scary:

      From a layman's perspective, does not electronic communication rely on electric pulses perfectly following their paths with the precise voltage and duration? And considering the physical size of the hardware, is it possible that electronic pulses can, due to extreme miniaturization, sometimes follow a path not intended by its designers, creating unexpected errors which can be difficult to trace or replicate, yet nevertheless cause a program to function abnormally?

      Yes, it is possible. It happens from time to time that a bit "flips" in a memory chip, causing a 1 to become a 0 or vice versa. It's usually attributed to cosmic radiation.

      Another possibility is that a bit gets flipped in transit, during a network transmission, by random electrical fluctuations in the wiring, or by cosmic interference with the transmitting or receiving chipset.

      The good part is, most of these occurrences will either:

      1. (best case) cause an error or exception to be thrown in the running program, because the corrupted data can no longer be interpreted, or, if it was program code that got corrupted, because the resulting instruction is invalid. Normal computers will fail in a "loud" way if this happens, and normal program error handling should ensure that nobody gets killed as a result. A program crash is a common symptom.
      2. cause a computation to continue executing normally but with incorrect data. In a fault-tolerant system, such as in aeronautics, this can be discovered and corrected by doing the same calculations in redundant systems. Normal computers will not find or correct this type of error.

      There are statistics from the big cloud computing players on how common bit flips are. IIRC, you won't see flips daily or weekly per computer, but you can have multiple memory bit flip incidents during the multi-year life of an average computer. Due to good modern programming practices, these very rarely become a problem to the user.

      Error correction techniques are employed to try to detect or correct bit errors. In communications, bit errors are fairly common, and the communications protocols contain contingencies like checksums and automatic re-transmission of failed packets.

      Memory in consumer hardware doesn't have error correction codes (ECC), but more expensive server hardware generally does. Anecdotes abound, but I've heard that servers with large amounts of memory will register a couple of correctable events a week, which the ECC memory handles automatically.
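      The checksum-and-retransmit idea above can be sketched in a few lines of Python. This is purely illustrative: the additive checksum and the payload are made up for the example, and real protocols use much stronger codes such as CRC32 or the TCP checksum.

      ```python
      # Toy illustration: a checksum catching a single flipped bit in transit.
      # Real protocols use stronger codes (CRC32, TCP checksums); this only
      # sketches the detect-and-retransmit idea.

      def checksum(data: bytes) -> int:
          """Simple additive checksum over the payload (mod 256)."""
          return sum(data) % 256

      payload = b"hello, world"
      sent_payload, sent_checksum = payload, checksum(payload)

      # Simulate a cosmic-ray bit flip in the first byte during transmission.
      corrupted = bytes([payload[0] ^ 0b00000100]) + payload[1:]

      # The receiver recomputes the checksum; a mismatch triggers retransmission.
      if checksum(corrupted) != sent_checksum:
          print("checksum mismatch - request retransmission")
      else:
          print("packet accepted")
      ```

      A single-bit flip always changes this toy checksum, but, unlike a CRC, a simple sum can miss some multi-bit errors, which is one reason real protocols use stronger codes.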

      posted in Pedagogy
    • RE: Bots are getting scary

      @j-jericho said in Bots are getting scary:

      https://futurism.com/newspaper-alarmed-chatgpt-references-article-never-published

      Think of each GPT model as a low-resolution JPEG picture of the internet. Just like a JPEG, it’s a lossy encoding: it compresses “everything that was ever written on the Internet” into a finitely-sized neural network. It will tend to get the broad strokes right, but just like when you zoom too far into a badly compressed photo that you saved off the Internet in 1997, you won’t be able to see what was originally there. If you try to “upscale” it (demand too much detail), the model will oblige, but each detail risks being a fiction.
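      The lossy-encoding point can be shown with a toy quantizer in Python. The signal values and the quantizer step are invented for the example; the point is only that coarse encoding keeps the broad shape while the fine detail is gone for good.

      ```python
      # Toy illustration of lossy encoding: heavy quantization preserves the
      # broad strokes of a signal but destroys the fine detail, and no amount
      # of "upscaling" can recover what was thrown away.

      signal = [0.12, 0.18, 0.21, 0.80, 0.83, 0.79]  # fine detail present

      step = 0.5  # very coarse quantizer, like a badly compressed JPEG
      encoded = [round(x / step) for x in signal]  # what actually gets stored
      decoded = [q * step for q in encoded]        # best-effort reconstruction

      print(decoded)  # [0.0, 0.0, 0.0, 1.0, 1.0, 1.0] - the step survives, the detail doesn't
      ```

      Asking the decoder for more detail than `decoded` contains is exactly the “upscaling” problem: any extra digits it produced would be invention, not recovery.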

      posted in Pedagogy
    • RE: How about a "Random Meaningless Image...let's see them string"?

      [image]

      [image]

      posted in Lounge
    • RE: How about a "Random Meaningless Image...let's see them string"?

      [image]

      posted in Lounge
    • RE: How about a "Random Meaningless Image...let's see them string"?

      [image]

      posted in Lounge
    • RE: How about a "Random Meaningless Image...let's see them string"?

      [image]

      posted in Lounge
    • RE: Bots are getting scary

      The important thing to keep in mind is that GPT is basically an advanced autocomplete engine. It takes whatever string you input and essentially generates the most statistically likely continuation.

      So given that Microsoft seem to have done a poor job of filtering the Bing chatbot’s outputs, you’ll get some funny results. With that in mind, it’s not at all surprising that if you start talking to it like a therapist, it will start coming up with dramatically depressive outputs. Likewise, if you accuse it of being wrong, it will do what people on the Internet do: defend itself rather than admit to a mistake.
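      What “most statistically likely continuation” means can be sketched with a toy bigram model in Python. The corpus here is invented, and real LLMs use neural networks over tokens rather than word counts, but the greedy pick-the-likeliest-next-word loop is the same basic idea.

      ```python
      # Toy bigram "autocomplete": pick the statistically most likely next
      # word given the previous one, based on counts from a tiny corpus.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count which word follows which.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def continue_text(word, length=3):
          out = []
          for _ in range(length):
              if word not in following:
                  break
              word = following[word].most_common(1)[0][0]  # greedy: most likely
              out.append(word)
          return out

      print(continue_text("the"))  # e.g. ['cat', 'sat', 'on']
      ```

      Note that the model never checks whether its continuation is *true*; it only checks whether it is *likely*, which is why a chatbot built on this principle argues back rather than admitting a mistake.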

      posted in Pedagogy
    • RE: Bots are getting scary

      @ssmith1226 said in Bots are getting scary:

      @j-jericho said in Bots are getting scary:

      @ssmith1226 Does ChatGPT provide references/footnotes with its summaries? When I want generic information, Wikipedia provides it, but usually when I do an internet search, I want to be able to screen the sources, as some have more veracity than others.

      If you ask it, it will provide sources. The below is an example. Remember, Wikipedia is not necessarily accurate either.

      Sources:

      Harbo HF, Kyvik KO. Sarcoidosis: a complex genetic and environmental disease. Genes & Immunity. 2003;4(2):63-70.
      Alho AM, van der Meide PH, Visser LH. Neurological manifestations of sarcoidosis. Sarcoidosis vasculitis and diffuse lung diseases. 2006;23(2):85-90.
      Alsulami Z, Castro-Gago M, Calero-Linares C, et al. Clinical manifestations and therapeutic options in neurosarcoidosis: a comprehensive review. Journal of the neurological sciences. 2017;375:85-93.

      You’d do well to check each of those citations before using them for anything important. ChatGPT is prone to hallucination when you prod it for specifics such as sources. See this short write-up from a mathematician:

      https://news.ycombinator.com/item?id=33841672

      I was pretty surprised and happy, because I hadn't had much success with Google. But it turns out that every single one of those references and links are made up. The references don't exist and the links themselves appear to be cobbled together. The last link for example, takes you to a paper on "Grain mixes for subgrade layers" - the title in the url is irrelevant and only the number matters.
      Googling for some of those authors throws up author pages with their publications, and it turns out they've never published papers with those titles!

      posted in Pedagogy
    • RE: Bots are getting scary

      @j-jericho It does not provide any sources. In fact, the model does not “know” where any of the training data came from. All it knows is a bunch of numerical weights between nodes in a graph… Well, that’s what a neural network is if I oversimplify it a bit.

      They fed it all of the text on the Internet up until 2021 so it has a fairly broad corpus as a basis. It can answer questions on many things that Wikipedia has no information on.
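      To make “a bunch of numerical weights between nodes” concrete, here is a radically oversimplified one-layer network in Python. The weights and bias values are arbitrary numbers chosen for the example; a real model has billions of them, and none of them records where the training text came from.

      ```python
      # A neural network, radically oversimplified: the model "knows" nothing
      # but these numbers. Inference is just multiply-and-add plus a squashing
      # function; nowhere is there a record of which training text produced
      # which weight.
      import math

      weights = [[0.8, -0.5], [0.3, 0.9]]  # connection strengths between nodes
      bias = [0.1, -0.2]

      def forward(inputs):
          """One layer: weighted sum per node, squashed through a sigmoid."""
          return [
              1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
              for row, b in zip(weights, bias)
          ]

      print(forward([1.0, 0.0]))  # two output activations between 0 and 1
      ```

      That opacity is the point: you can inspect every number and still not be able to say which document taught the network any given fact.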

      posted in Pedagogy
    • RE: Bots are getting scary

      @ssmith1226 said in Bots are getting scary:

      @jolter
      I agree that the “free public version” that I have access to has limited abilities in certain areas, but imagine what the capabilities will be in 5, 10, or 25 years. In the mean while consider what a human writer can do with this. Here are some extreme examples:

      StevenP
      Put the following text into the style of Chaucer. ***“And so we see yet again how Chatgpt is all style and little substance.

      You gave it some clever prompts, and the results are fun to read, but nobody could read that and actually gain any understanding of how Adams’ teachings were different from Chicowitz’s.

      I’m not saying that AI will never threaten real writers, perhaps it will improve, but in this current implementation I think the only writers who need to fear for their jobs are middling copywriters and corporate PR drones who are already churning out meaningless molasses for the masses. Good riddance to such jobs, I say.”***

      Lo and behold, good sirs and ladies, doth Chatgpt showeth us once more its mastery o'er style, yet lacketh substance in its words. Ye gave it clever prompts, and the results doth entertain, yet none may gain true understanding of how Adams' teachings doth differ from Chicowitz's.

      I doth not say that AI shall ne'er threaten true scribes, perchance it shall improve, but in this current form, I doth believe that only those writers who pen mediocre words and speaketh nought but empty sweet nothings for the masses need fear for their livelihoods. Farewell and good riddance to such tasks, I sayeth.

      StevenP
      Put the same text into the style of JR Tolkien

      Behold, the deeds of ChatGPT yet again reveals its skill in the art of words, yet the depth of its meaning is found wanting. Ye gave it clever prompts, and the results are pleasing to the ear, but none shall find true understanding of the distinctions betwixt the teachings of Adams and Chicowitz therein.

      I do not say that this device of Men shall ever pose a true threat to the craft of true scribes, for perchance it shall improve. But in this present form, let only those who pen mediocre words and speak naught but empty promises to the multitude fear for their livelihood. Farewell and good riddance to such works I say.

      Ok, so you managed to find a style it didn’t imitate well…
      That’s not how I remember Tolkien, anyway.

      Plus, it reversed the meaning of one of my sentences: “shall ever” should have been “shall never”…

      As for the Shakespeare, I don’t think it has its “doths” and “lackeths” down. I don’t think it’s using proper Elizabethan grammar?

      Anyway, I don’t know if you’re agreeing or disagreeing with me.

      posted in Pedagogy
    • RE: Bots are getting scary

      @ssmith1226 And so we see yet again how Chatgpt is all style and little substance.

      You gave it some clever prompts, and the results are fun to read, but nobody could read that and actually gain any understanding of how Adams’ teachings were different from Chicowitz’s.

      I’m not saying that AI will never threaten real writers, perhaps it will improve, but in this current implementation I think the only writers who need to fear for their jobs are middling copywriters and corporate PR drones who are already churning out meaningless molasses for the masses. Good riddance to such jobs, I say.

      posted in Pedagogy
    • RE: The difference in timbre caused by using additional valves

      @j-jericho said in The difference in timbre caused by using additional valves:

      I hear timbre and intonation changes with different valve combinations, but the timbre changes aren't enough to sound like a different horn. I just write the phenomenon off to "nature of the beast".

      This matches my experience.

      The clearest example might be the F# in the first space of the staff. Easy enough to alternate between 2 and "123+trigger" and get the same note at the same pitch. In this experiment I hear a clear timbre difference. That said, I would not really consider using 123 for making music.

      So, add my vote to "yes I can hear it but does it matter".

      As for why it happens, Rowuk's summary really says it all -- I'd only add that bit about the number of sharp turns that you mentioned yourself, @Trumpetb:

      The change in timbre is based on the cylindrical to tapered proportions as well as the specific partial being played.
      Many modern piston trumpets have been homogenised for a generally even tone.

      When it comes to cornets, I believe there are differences that affect the timbre more than the number of sharp/smooth turns. Notably the length of the leadpipe taper, before going into the cylindrical valve block, as well as the ratio between the narrowest diameter of the mouthpiece and the bell radius.

      posted in Bb & C Trumpets
    • RE: Bots are getting scary

      In case anyone is interested in getting a surface understanding of how these Large Language Models work, I found this article to be quite enlightening:

      https://thegradient.pub/othello/

      Disclaimer: I have some previous professional exposure to machine learning, so I understand some of the jargon here. The article might not be quite as accessible if you're not already into the statistics and math behind ML. Nonetheless, they make a very approachable thought experiment and manage to implement it in reality. The article shows us some properties of how these models are able to be so eerily good at very diverse topics, from constructing a correct computer program to playing a board game with (mostly) valid moves.

      Our experiment provides evidence supporting that these language models are developing world models and relying on the world model to generate sequences.

      posted in Pedagogy
    • RE: Weirdest thing happened

      @trumpetb Look, you’re explaining what pitch is to a board full of musicians. You don’t see how that’s superfluous and devoid of content?

      Another poster alluded to it more subtly and politely in this very thread, I’ll point out.

      As to your explanation of how streamed audio works, I understand it perfectly. I have a degree in computer communications, so I understand each of the concepts you mention, and I can tell that this explanation is pure speculation with no useful conclusion drawn at all. You come off as knowledgeable in IT and music, but your entire contribution falls flat when you don’t have anything actually helpful to post.

      posted in Miscellaneous
    • RE: Weirdest thing happened

      @trumpetb said in Weirdest thing happened:

      @Jolter You chose a topic that nobody knows the answer to, so by your argument, nobody should answer the OP, but the OP wants an answer.

      Here I disagree. If I had asked the same question, I would have felt better receiving no answer than getting a seemingly endless stream of verbiage that culminated in an “I don’t know”.

      At the risk of using too many words for you, if you dont like my answers you have the choice to not read them.

      I do, and in practice that is most often the choice I’m making, to simply scroll by.

      Some people in here seem to value my words even if you do not.

      I can’t speak for others here, but neither can you, I think.

      I happen to believe that members should be able to contribute freely so I would never suggest that you stop writing even though you appear to be suggesting that I stop writing.

      If members dont like my words they should tell me they dont like them, they should not tell me to stop writing.

      Thats my opinion.

      I’m not telling you to stop writing. (Of course, I don’t have any such authority.)

      I’m asking you to please ask yourself a particular question (see above) before posting. It is my strongly held opinion that everyone should ask themselves that question before posting here. Yours is the only posting where I have felt compelled to mention it, since to most people it seems to be an obvious thing to do.

      I will not object to, or disagree with, any particular content in your posts to this thread, since in my opinion there is no content to them.

      posted in Miscellaneous
    • RE: Weirdest thing happened

      @trumpetb said in Weirdest thing happened:

      May I speculate
      [...]
      Could this be at the root of this strange behaviour.

      Dude. If you don't know, have you considered that you do not have to be the person who tries to help?

      I get severely turned off by these huge walls of text you keep posting in nearly every thread. Each containing about 0.1 ounces of actual content, wrapped in a metric ton of words.

      When you feel the urge to post something helpful, just, please ... take a little break, take a walk, consider the question "am I adding something of value to the thread, or am I just stroking my ego by trying to seem knowledgeable?"

      Sometimes you do provide insightful commentary, so I'd like to keep interacting with you, but the amounts of verbiage you spew at every occasion really tries a reader's patience. The "ignore" button tempts me every time I see one of your walls of text.

      posted in Miscellaneous