........................................................................
Assclown Mini Tacos
And yes, this was prompted by that stupid thread that Spanky posted.
........................................................................
Sockpuppet
That’s absurd. And it’s clear you don’t know how to use AI to improve efficiency. It’s like if you said 20 years ago that finding data using Google instead of a card catalog wasn’t valid.
........................................................................
Assclown Mini Tacos
Sockpuppet wrote:
That’s absurd. And it’s clear you don’t know how to use AI to improve efficiency. It’s like if you said 20 years ago that finding data using Google instead of a card catalog wasn’t valid.
All this shyt is about as unreliable as Wikipedia - if you use it for anything more than finding references to actual sources you are just showing that you are an idiot.
Posting a link to the hallucinated shyt that it spews as if it were some sort of oracle is simply showing that you are a BRAINDEAD FUQING QUNT.
........................................................................
Jurijknotpeemus
What LLM?
........................................................................
Spanky
Assclown Mini Tacos wrote:
All this shyt is about as unreliable as Wikipedia - if you use it for anything more than finding references to actual sources you are just showing that you are an idiot.
Posting a link to the hallucinated shyt that it spews as if it were some sort of oracle is simply showing that you are a BRAINDEAD FUQING QUNT.
Again, you are proving you are AI-illiterate.
ChatGPT is incredibly good at summarizing information. When you want a summary of information, you can research and prepare it yourself, or you can be efficient and have an AI do it for you. Is it possible the AI might make a small error? Of course. But so can humans. The question is whether AI gets the gist right and is as accurate as, or more accurate than, a human. Humans are not perfect, and AI is not perfect. If a human is 95% accurate and takes two hours to do something, and AI is 85% accurate and takes two seconds to do something, then AI is the obvious choice for everyday use. When things are critical, you'll want to use more caution.
Go ask ChatGPT about the functioning of the solar system. It will be very accurate. Even if it makes a small mistake - say it invents a URL - it will be overall correct about the known details of how the solar system functions.
My use of it was similar: summarize the general consensus of a subject that has been scientifically examined.
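To put rough numbers on that 95%-vs-85% trade-off, here is a toy throughput calculation (purely illustrative; the accuracy and timing figures are just the made-up ones above):

```python
# Toy comparison using the made-up figures from this post.
human_acc, human_time_s = 0.95, 2 * 60 * 60   # 95% accurate, 2 hours per task
ai_acc, ai_time_s = 0.85, 2                   # 85% accurate, 2 seconds per task

# Expected correct results per hour of wall-clock effort:
human_rate = human_acc * 3600 / human_time_s  # ~0.48 correct tasks/hour
ai_rate = ai_acc * 3600 / ai_time_s           # 1530 correct tasks/hour
print(f"human: {human_rate:.2f} correct/hour, AI: {ai_rate:.0f} correct/hour")
```

For everyday use the throughput gap dwarfs the accuracy gap; for critical work, that 15% error rate is exactly why you add verification.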
........................................................................
Assclown Mini Tacos
Spanky wrote:
Again, you are proving you are AI-illiterate.
ChatGPT is incredibly good at summarizing information. When you want a summary of information, you can research and prepare it yourself, or you can be efficient and have an AI do it for you. Is it possible the AI might make a small error? Of course. But so can humans. The question is whether AI gets the gist right and is as accurate as, or more accurate than, a human. Humans are not perfect, and AI is not perfect. If a human is 95% accurate and takes two hours to do something, and AI is 85% accurate and takes two seconds to do something, then AI is the obvious choice for everyday use. When things are critical, you'll want to use more caution.
Go ask ChatGPT about the functioning of the solar system. It will be very accurate. Even if it makes a small mistake - say it invents a URL - it will be overall correct about the known details of how the solar system functions.
My use of it was similar: summarize the general consensus of a subject that has been scientifically examined.
No, you are showing that you have sucked up all the hype and run with it. These systems do not, and cannot, "understand" anything or "think". Can they be useful for generating inferences? Yes - but in the end they are just taking all the stuff they have sucked in, dicing it up, and reassembling it in an order that seems likely given the way it appeared in the source material.
They don't have any idea of "truth" - they just have a fairly good idea of what it would LOOK like.
The example you used is especially absurd because intelligence research is a third-rail subject: nobody who wanted to carry on working would ever publish a paper that contradicted the "it's all environmental" thesis, because to do so would be the end of their career. Hence the slop machine will just dice up all this prefiltered material and then output something that - amazingly - looks like all the stuff it sucked up in the first place.
AI cultists are incredibly naive.
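For what it's worth, "reassembled in an order that seems likely" can be made concrete with a toy next-word model. This is a crude bigram chain, nothing like a real transformer, but it shows the same "likely continuation" idea with no notion of truth anywhere (the corpus string is obviously invented for illustration):

```python
import random
from collections import defaultdict

# Record which word tended to follow which in the training text.
def train(corpus: str) -> dict[str, list[str]]:
    follows = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

# Emit text by repeatedly picking a continuation seen in training.
def babble(follows: dict[str, list[str]], start: str, n: int = 10) -> str:
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the sun is a star the sun is bright a star is bright"
print(babble(train(corpus), "the"))  # plausible-looking order, zero "truth"
```

It produces word orders that look like the source material, which is the point: likelihood of appearance, not correctness, is all it ever tracks.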
........................................................................
Tom Leykis
Spanky wrote:
Again, you are proving you are AI-illiterate.
ChatGPT is incredibly good at summarizing information. When you want a summary of information, you can research and prepare it yourself, or you can be efficient and have an AI do it for you. Is it possible the AI might make a small error? Of course. But so can humans. The question is whether AI gets the gist right and is as accurate as, or more accurate than, a human. Humans are not perfect, and AI is not perfect. If a human is 95% accurate and takes two hours to do something, and AI is 85% accurate and takes two seconds to do something, then AI is the obvious choice for everyday use. When things are critical, you'll want to use more caution.
Go ask ChatGPT about the functioning of the solar system. It will be very accurate. Even if it makes a small mistake - say it invents a URL - it will be overall correct about the known details of how the solar system functions.
My use of it was similar: summarize the general consensus of a subject that has been scientifically examined.
Where did our "Sheriff of Everything" copy and paste that TOTAL BULLSHT from? Sounds like the Lunatic Democrats are at it again.
........................................................................
Spanky
Assclown Mini Tacos wrote:
No, you are showing that you have sucked up all the hype and run with it. These systems do not, and cannot, "understand" anything or "think". Can they be useful for generating inferences? Yes - but in the end they are just taking all the stuff they have sucked in, dicing it up, and reassembling it in an order that seems likely given the way it appeared in the source material.
They don't have any idea of "truth" - they just have a fairly good idea of what it would LOOK like.
The example you used is especially absurd because intelligence research is a third-rail subject: nobody who wanted to carry on working would ever publish a paper that contradicted the "it's all environmental" thesis, because to do so would be the end of their career. Hence the slop machine will just dice up all this prefiltered material and then output something that - amazingly - looks like all the stuff it sucked up in the first place.
AI cultists are incredibly naive.
Nowhere did I say that AI "understands" or "thinks" or knows "truth" from fiction. Nowhere. It is something you totally fabricated. Nice fuqing strawman. What I said, accurately, was that AI is good at summarizing topics.
As for your assertion that The Real Truth is being Hidden by Them: that's not data. That's a conspiracy theory.
........................................................................
Spanky
Tom Leykis wrote:
Where did our "Sheriff of Everything" copy and paste that TOTAL BULLSHT from? Sounds like the Lunatic Democrats are at it again.
What's bullshyt about it, Tom?
........................................................................
Assclown Mini Tacos
Spanky wrote:
Nowhere did I say that AI "understands" or "thinks" or knows "truth" from fiction. Nowhere. It is something you totally fabricated. Nice fuqing strawman. What I said, accurately, was that AI is good at summarizing topics.
As for your assertion that The Real Truth is being Hidden by Them: that's not data. That's a conspiracy theory.
You posted a ChatGPT summary and then insisted in a shrill and angry way that it was The Truth (tm) and that anyone who didn't just blindly accept it was wrong.
And you are clearly entitled to your opinion - but if you talk to anyone who works in psychology, they will tell you that intelligence research is something you want to stay far away from, because the risk of coming up with "unacceptable" conclusions is simply too high.
Rather like the way a medieval version of an LLM would insist, 100%, that God was real and that doubters were going to hell, because anyone who expressed a contrary opinion would end up being murdered. Today's version is less extreme, but it's the same mechanism.
........................................................................
Spanky
Assclown Mini Tacos wrote:
You posted a ChatGPT summary and then insisted in a shrill and angry way that it was The Truth (tm) and that anyone who didn't just blindly accept it was wrong.
And you are clearly entitled to your opinion - but if you talk to anyone who works in psychology, they will tell you that intelligence research is something you want to stay far away from, because the risk of coming up with "unacceptable" conclusions is simply too high.
Rather like the way a medieval version of an LLM would insist, 100%, that God was real and that doubters were going to hell, because anyone who expressed a contrary opinion would end up being murdered. Today's version is less extreme, but it's the same mechanism.
The ChatGPT link I provided was an accurate summary of the data available. I never "insisted in a shrill and angry way that it was The Truth." This is, again, something you have made up. A total fabrication.
........................................................................
Tom Leykis
Spanky wrote:
What's bullshyt about it, Tom?
Another reason I don't answer our resident liar's stupid questions.
........................................................................
Assclown Mini Tacos
Spanky wrote:
The ChatGPT link I provided was an accurate summary of the data available. I never "insisted in a shrill and angry way that it was The Truth." This is, again, something you have made up. A total fabrication.
"Read the OP. It’s not explained by genetics. Read the OP. Read the OP. Read the OP."
Sounds to me like a shrill and angry assertion that the ChatGPT slop was "The Truth" and should not be challenged.
........................................................................
Spanky
Tom Leykis wrote:
Another reason I don't answer our resident liar's stupid questions.
So you can't articulate your position. Got it. Low IQ scum.
........................................................................
Spanky
Assclown Mini Tacos wrote:
"Read the OP. It’s not explained by genetics. Read the OP. Read the OP. Read the OP."
Sounds to me like a shrill and angry assertion that the ChatGPT slop was "The Truth" and should not be challenged.
Yes, reading the OP was important as it contained answers to questions being asked. So, "sounds like" isn't going to cut it as a way for you to get out of fabricating things I haven't said.
Please note you've told several lies about what I have said, you've engaged in several logical fallacies, and yet you still haven't countered a single item from the summary.
Sorry that the data doesn't match your feelings.
........................................................................
Assclown Mini Tacos
Spanky wrote:
Yes, reading the OP was important as it contained answers to questions being asked. So, "sounds like" isn't going to cut it as a way for you to get out of fabricating things I haven't said.
Please note you've told several lies about what I have said, you've engaged in several logical fallacies, and yet you still haven't countered a single item from the summary.
Sorry that the data doesn't match your feelings.
We are clearly going to have to agree to disagree about this. You apparently trust the outputs of LLMs and I don't.
........................................................................
Spanky
Assclown Mini Tacos wrote:
You apparently trust the outputs of LLMs and I don't.
Which demonstrates that you are AI-illiterate. It is simply not an area that you understand. You are ignorant of it. You could learn, but it is clear you are biased against AI and are not interested in learning how to use it as a tool just like any other tool. You have an algorithm aversion.
https://en.wikipedia.org/wiki/Algorithm_aversion
........................................................................
Spanky
Spanky wrote:
Which demonstrates that you are AI-illiterate. It is simply not an area that you understand. You are ignorant of it. You could learn, but it is clear you are biased against AI and are not interested in learning how to use it as a tool just like any other tool. You have an algorithm aversion.
https://en.wikipedia.org/wiki/Algorithm_aversion
BTW - I do not trust AI blindly. I have learned to use it. I am skilled at it. I have seen it produce perfect results that I validated through my own knowledge or through verification. And, of course, I have seen it make mistakes. I know where it excels and where it struggles. One area where it is, effectively, the gold standard is in summarizing technical or scientific information.
........................................................................
Tom Leykis
Spanky wrote:
So you can't articulate your position. Got it. Low IQ scum.
Suit yourself.
........................................................................
Assclown Mini Tacos
Spanky wrote:
Which demonstrates that you are AI-illiterate. It is simply not an area that you understand. You are ignorant of it. You could learn, but it is clear you are biased against AI and are not interested in learning how to use it as a tool just like any other tool. You have an algorithm aversion.
https://en.wikipedia.org/wiki/Algorithm_aversion
You seem to be assuming I don't use this stuff, and you are wrong - I do. I just have about as much faith in the alleged "facts" it generates as I would in the "facts alleged" in a legal pleading. Hence I get very nervous when people start acting like it's some sort of inerrant, perfect oracle.
For example, I've seen it get summaries of scientific papers horribly wrong (although that might be down largely to how turgid and terrible most academic prose is), and I've seen repeated cases of LLMs hallucinating APIs when asked to generate code. Sure, if you spend enough time nudging them in the correct direction you can get useful output - but that's a problem in itself, since you can also do the opposite and gaslight an LLM into generating total nonsense.
I've personally found it much more useful for doing things that can be verified - ask it to generate code, then get it to fix the errors until you eventually produce something that both compiles and produces the correct output (something like the toy loop sketched at the end of this post) - and this can certainly save you a lot of time writing boilerplate.
My issue is more that the ab-initio output produced by these models seems in my experience to have a large risk of containing factual errors and once you start prompting it to correct them it's not really the output of the LLM anymore - it's the LLM illuminated (or possibly contaminated, depending on your point of view) with the user's perspective.
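For concreteness, here is a minimal sketch of that generate/compile/fix loop. The llm_complete function is a hypothetical stand-in for whatever model API you actually use - it is not any real vendor's interface - and the whole thing is illustrative rather than production code:

```python
import os
import subprocess
import tempfile

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM you use."""
    raise NotImplementedError("wire this up to your model of choice")

def generate_until_it_compiles(task: str, max_rounds: int = 5) -> str | None:
    """Ask the model for code, feeding compiler errors back until it builds."""
    prompt = f"Write a C program that does the following:\n{task}"
    for _ in range(max_rounds):
        code = llm_complete(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
            f.write(code)
            path = f.name
        # The compiler, not the model, is the ground truth here.
        result = subprocess.run(["cc", path, "-o", path + ".out"],
                                capture_output=True, text=True)
        os.unlink(path)
        if result.returncode == 0:
            return code  # it compiles; a human still has to check the output
        prompt = (f"This code failed to compile:\n{code}\n"
                  f"Compiler errors:\n{result.stderr}\nFix it.")
    return None  # gave up: the model kept producing broken code
```

Compiling is of course only half the verification - you still have to check that the result actually does what you asked, which is why this works for boilerplate and not for facts.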
........................................................................