
On the upcoming implosion of AI

binary visions

The voice of reason
That was a lot of words.

I mean, he's not wrong about a lot of it. The hype machine is at full tilt, and the big players are burning through cash to over-promise and under-deliver.

But that's true for essentially every technology ever. LLMs are way better than the author gives them credit for, and they are extremely useful for some types of tasks. We use them at work, I use them personally, and they can be powerful tools. Again, like essentially every technology ever, some companies are going to rush to ask, "how can we use this tool to solve a problem?" rather than, "what problem do we have that this tool can solve?"

This isn't new, and many of those companies will die.

As it turns out - surprise surprise - the hard problems are hard, and they're taking a while to solve. They may not even be solvable with the current approach; there are limits to a statistical model of existing writings. And there is the giant thorny issue of how you respect copyright while needing to shovel billions of words and seconds of video into the models - to me, that's a far more intractable issue than most of the technology gripes that the author goes on about (though he does acknowledge it).

But to me this doesn't feel much different from the web, or smartphones, or voice recognition, or the internet of things, or whatever. New Tech arrives, everyone scrambles to stuff New Tech into their Things so they can advertise that their Things are now Moar Better with 110% New Technologies Added. Eventually the pendulum swings back as the excitement dies down. Rinse and repeat.

But as a tool for distilling crazy amounts of information, or being able to ask for answers using natural language, nuance and context, or for finding patterns in huge datasets, LLMs are fucking awesome. It's probabilistic so you need to verify the output, but I can ask a very specific question using specific context and conditions, and 9 times out of 10 the answer will at least give me the right places to look.
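To make that concrete, here's roughly what that pattern looks like in practice (a minimal sketch - it assumes the OpenAI Python SDK, and the runbook file and question are invented for illustration):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical context document - the point is to pin the model to
# specific context and conditions rather than asking cold.
context = open("deploy_runbook.md").read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Answer only from the provided runbook. "
                    "If the answer is not in it, say so."},
        {"role": "user",
         "content": f"Runbook:\n{context}\n\n"
                    "Under what conditions do we roll back a canary deploy?"},
    ],
)

# Probabilistic output: verify it against the source before acting on it.
print(response.choices[0].message.content)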
 

gonefirefightin

free wieners
In the little bit of AI adoption I have used and explored as of late, I can't quite see it being a standalone solution for really anything; it's more of a rough implement for very specific tasks and automation, and it still requires a significant amount of learning to streamline.

I've used Meta's AI with mixed results using the Raybans connected to the phone. It can count how many boards are in a unit of lumber to a fairly accurate degree (±10%). It can estimate a pile of dirt/aggregate with around the same results as long as the pile isn't over 10 or 15 yards. Otherwise, it just seems to be a hands-free gimmick instead of using Google or traditional methods.

I have been using it in my accounting for the LLCs and it is terrible; its main weakness is categorization and receipts. I ended up turning it off except for reconciliations.

I am not sure what its implosion will leave in its wake, but I would guess the financial/trading industry will lose significant ground and money.

If ChatGPT is the best it has to offer at this point, I think its most-used feature is art and graphic design.
 

binary visions

The voice of reason
@gonefirefightin I think this is more an issue of use cases.

You've fed it extremely squishy information (a photo of something is, in computer terms, about the least helpful information imaginable) and asked for specific results. In many respects, these were tasks that were extremely difficult and would have required specialized, purpose-built applications to handle just a few years ago. "Estimate the number of boards in this pile" required image capture > scene identification (to handle size estimates) > object identification > database of object properties. The LLM isn't tuned for anything, so the fact that it can create this estimate at all is fairly impressive - it's hard to overstate how difficult it is to build a tool that will accept arbitrary inputs and generate arbitrary (user-directed) outputs.
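For comparison, that whole purpose-built pipeline collapses into one call against a general multimodal model (a rough sketch - assumes the OpenAI Python SDK, and the model name and prompt are illustrative only):

import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def estimate_board_count(image_path: str) -> str:
    # One request replaces image capture > scene identification >
    # object identification > object-property lookup.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Estimate how many boards are in this unit of "
                         "lumber. Give a count and a rough error range."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(estimate_board_count("lumber_unit.jpg"))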

This is just party tricks, though. Those party tricks are really at the core of what JBP's article is talking about - monetizing party tricks is hard. Just look at voice assistants - Amazon has set fire to billions of dollars because nobody trusts Alexa to order more laundry detergent; what they really want is for Alexa to dim the lights or set a timer or do basic arithmetic, because it's all low stakes and immediate feedback. "Summarize this email for me" is a party trick. It's neat, but I'll never make a computer or phone purchasing decision on it. So right now companies are stuffing this tech into things and hoping that if it just shows up in enough places, it'll become an indispensable part of our computer experience. I just don't know what the killer app is at the moment.

The magic happens when you tune a model to produce certain types of outputs, or train them on certain types of data. I believe we're going to see some really interesting developments in scientific fields due to the ability of LLMs to parse amounts of data that have historically been impossible to handle, and distill patterns out of them.