Technology will make your life easier

To be fair to language developers, languages have removed a whole lot of tedious work from writing software, and reuse of existing code has made programmers far more efficient.

2 Likes

Sure, but "vibe coding" is just the latest attempt to say that anyone can be a programmer.

3 Likes

X injection is always the thing with the hot new stuff somehow.

I have used the assistants quite a bit. It is fascinating to see what they can do but they do not always end up saving me time.

And as @RogerBW states below, it is not easy to specify what code should be written in the first place. I keep citing the example we had right at the start, when my partner wanted to use ChatGPT to write a script. He even studied computer science himself and knows programming basics (his career took him in another technical direction, without programming). But he was unable to get the chatbot to write the program I could easily coax out of it by prompting.

My local colleagues tend to agree that experts can get a lot of mileage out of these things. And I feel like my knowledge or my ability to gain it quickly has grown immensely.

We discussed how, despite seemingly doing the opposite and enabling anyone to write code, it actually makes things harder for beginners. How can you gain actual expertise from machines that try to tell you that you don't need it? It will probably develop into yet another meta-skill (because this tech is not going away) that needs to be mastered.

Actually, Humble has, for the first time in ages, a tech bundle I might actually buy. Usually I look through the titles and think "useless, useless, know already, useless, know already, useless, have a better source, useless, …" This one about machine learning seems a bit more useful to me somehow.

5 Likes

Yeah, it's a weird time, and it's difficult to predict where these tools end up fitting into people's workflows most commonly. I tried some Copilot bits but turned off the autocomplete, as I found it super distracting when I'm trying to write something and it interrupts my thoughts with suggestions. But I've got friends who use it a lot to describe what they want and get a lot of output quickly that they can review rather than write.
Will be interesting to see how it affects younger people who have access to it from the start. Seems like a huge disruption to how learning is being done (or avoided).

4 Likes

This!
I am already seeing a lack of appreciation for the actual acquisition of knowledge. I can even see it in myself: "I don't need to know, I can google that."

LLMs are the next step.

But I grew up and went to school before these tools happened.

My cousin who is a teacher already had to adapt to kids having access to google/wikipedia at all times. He has changed his history exams accordingly to be based on understanding much more than knowing some numbers.

ChatGPT will allow them to simulate understanding.

I don't really want to sound old and grumpy. But I probably do when I tell our friends' teenagers how important it is that they learn to acquire knowledge and train their brains accordingly.

I can get a lot out of ChatGPT because I am an expert in my field. They can use LLMs on their way to becoming experts themselves, of course, but they still have to put in some effort. Their effort will differ from mine.

What can I tell them to make them see the need to become good enough at their "stuff" to be able to tell when the LLM makes a mistake? And what types of effort will/should they prioritize?

3 Likes

Reviewing code is harder than writing it.

And it's already been shown that trusting LLM output inhibits critical thinking skills. (In a Microsoft study that they then tried to bury.)

4 Likes

Do you have a link to the "tried to bury it" study? Or a reference? I'd be interested in reading more about that.

1 Like

My problem with "AI" stuff (other than ethical problems regarding environmental issues, plagiarism, capitalism in general) is that it's just regurgitating existing information, but in a less useful way.

There's no learning journey. There's no surrounding context (documentation, discussion, etc.) that will clarify anything you don't understand.

2 Likes

The only AI application I've had be worth a darn is generating a flowchart of a process.

Because I asked twelve humans to do it first, and they all refused, telling me it was too hard.

1 Like

Things I use genAI for:
Providing context for industry jargon: So, for example, I may not be sure how to correctly translate a Japanese phrase that reads like "islands of austenite in martensite", the machine translation isn't great, and I don't really know what to search for in English: it's likely that providing some background to the LLM will result in some in-context text that tells me the correct term (martensite-austenite constituent). (Note this is not an actual example, I already had this term by another route, just illustrative.)

Providing leads for further searches: if I want to know what the orders correspond to, geographically and historically, on the John Company map, the obvious best source would be Cole Wehrle, but he's a busy man and unlikely to answer my questions. No-one else I know has a clue. Digging up the necessary documents to try and piece it all together seems like a lot of work for such low-value data. So the genAI collates a lot of information I don't have easy access to, makes some horrendous errors, but gives me leads to start looking for better answers.

And soon, apparently: doing 95% of my translation. Not looking forward to this one, but that's the direction the company is pushing for.

3 Likes

My company currently has a very limited approved use case for AI. Mostly because of the concern around sharing proprietary and secret data with these services.

I mean, we're a datacenter company and we've built out several datacenter suites/floors for AI/ML workloads… But it's more profitable to sell that space/power to tenant customers than to consume it ourselves.

3 Likes

Paper reference here; "tried to bury" is perhaps a bit strong, but they did a minimal publication with no fanfare at all, unlike everything else with "AI" in it.

2 Likes

I will not be surprised if most of the existing AI tech disappears. It's incredibly expensive, and it doesn't work. No one is going to pay what it actually costs to run the stuff when the VC money runs out.

LLMs are a dead end, and they're garbage, and they're getting worse. More expensive to train, more expensive to run, and the results are getting worse. AI slop is really fucking them up. They ingest garbage from the previous models, and it reduces the quality of the next one. It's not just the straight garbage-in, garbage-out problem, but also a reduction in the breadth of knowledge: LLMs are just fancy autocomplete, and things that occur rarely in the data are less likely to be in the output, so the new models never see them. That also creates feedback that makes them worse.
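
A toy sketch of that feedback loop, just something I made up to illustrate the point (not measured from any real model): sample a "corpus" from the current word distribution, refit the model on only what appeared, and repeat. The rare words drop out within a few generations.

```python
import random
from collections import Counter

random.seed(0)

# Toy vocabulary: a handful of common words, many rare ones.
vocab = [f"word{i}" for i in range(50)]
weights = [100.0] * 5 + [1.0] * 45
total = sum(weights)
model = {w: p / total for w, p in zip(vocab, weights)}

for generation in range(10):
    # "Generate": sample a small corpus from the current word distribution.
    words = list(model)
    corpus = random.choices(words, weights=[model[w] for w in words], k=500)
    # "Retrain": the next model only knows what actually appeared in that corpus.
    counts = Counter(corpus)
    model = {w: c / len(corpus) for w, c in counts.items()}
    print(f"generation {generation}: {len(model)} of {len(vocab)} words survive")
```

The surviving vocabulary shrinks every generation, because anything that fails to appear once is gone for good.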

4 Likes

It appears as though the legal system is catching up with LLMs as well: once they have to start paying to train on copyrighted materials, if they are even allowed to, that suddenly changes the numbers wildly against LLMs, even more than they already are.

2 Likes

My new hobby: the "watching AI slowly drive Microsoft employees insane" thread on Reddit.

3 Likes

Do you mean the concept, or the current LLMs and their companies? Because I still think the whole idea is going to stay and keep being developed.

If ChatGPT goes away, well, then it was HotBot or AltaVista and not Google. Indeed, it is quite possible that the first crop of these new things will do what the first search engines did: vanish.

But we have now seen that LLMs can be quite capable of some things, even if less than the hype cycle suggests, and I think that will lead to further research and improvement.

But just like cars still need drivers, despite everything some techbros tried to tell us, most tasks still need humans at the helm. Anyone trying to let LLMs work unattended will get some nasty surprises.

Right now, I take it as an improved search engine any day over the enshittified Google results or wading through Stack Overflow answers.

3 Likes

It is fascinating contrasting this to the "you will soon be obsolete" conversations going on elsewhere.

2 Likes

I can see LLMs thriving in closed systems. I would love to have a company AI that holds collective knowledge and pretty much tells me what this thing is, because I have never touched this feature before.

That's something ChatGPT can't do, as this is closed corporate information.

3 Likes

Today I learned about a patent application for gamifying the process of generating valid nonce values for blockchains, because doing so uses a lot of processing power. Yes, that's right, blockchain mining is too resource-intensive, so they are trying to get humans to do it now…
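
For anyone who only knows the other meaning: in proof-of-work schemes a nonce is just a number you keep changing until the block's hash clears a difficulty target, and the only way to find one is brute force. A minimal sketch of the idea (toy difficulty, made-up block data, nothing from the actual patent):

```python
import hashlib
from itertools import count

def find_nonce(block_data: str, difficulty: int = 4) -> int:
    """Try nonces 0, 1, 2, ... until sha256(block_data + nonce) starts with
    `difficulty` zero hex digits. There is no shortcut: you just keep hashing."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

nonce = find_nonce("example block header")
print(nonce, hashlib.sha256(f"example block header{nonce}".encode()).hexdigest())
```

Each extra required zero multiplies the expected number of attempts by 16, and real networks demand far more than four of them, which is where all that processing power goes.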

(Incidentally, this was also the first time I ever heard "nonce" to mean anything other than "pedophile")

6 Likes

I've never heard nonce in the context of paedophilia, but have been annoyed for a long time that nonce means a throwaway, one-time-use value.

I would much prefer it be the zero equivalent of "once". This would, of course, involve changing the pronunciation.

Q: "How many times have you won?"
Respondent #1: "Once"
Respondent #2: "Nonce"

6 Likes