To be fair to language developers, languages have removed a whole lot of tedious work from writing software, and reuse of existing code has made programmers far more efficient.
Sure, but "vibe coding" is just the latest attempt to say that anyone can be a programmer.
X injection is always the thing with the hot new stuff somehow.
I have used the assistants quite a bit. It is fascinating to see what they can do but they do not always end up saving me time.
And as @RogerBW states below, it is not easy to specify what code should be written in the first place. I keep citing the example we had right at the start, when my partner wanted to use ChatGPT to write a script. He even studied computer science himself and knows programming basics (his career took him in another technical direction, without programming). But he was unable to get the chatbot to write the program that I could easily coax out of it by prompting.
My local colleagues tend to agree that experts can get a lot of mileage out of these things. And I feel like my knowledge or my ability to gain it quickly has grown immensely.
We discussed that, despite seemingly being the opposite and enabling anyone to write code, it instead makes things harder for beginners. How can you gain actual expertise from machines that try to tell you that you don't need it? It will probably develop into yet another meta-skill (because this tech is not going away) that needs to be mastered.
Actually, Humble has, for the first time in ages, a tech bundle I might actually buy. Usually I look through the titles and think "useless, useless, know already, useless, know already, useless, have a better source, useless, …" This one about machine learning seems a bit more useful to me somehow.
Yeah, it's a weird time, and it is difficult to predict where these tools will end up fitting into people's workflows most commonly. I tried some Copilot bits but turned off the autocomplete, as I found it super distracting when I'm trying to write something and it interrupts my thoughts with suggestions. But I've got friends who use it a lot to describe what they want and get a lot of output quickly that they can review rather than write.
Will be interesting to see how it affects younger people who have access to it from the start. Seems like a huge disruption to how learning is being done (or avoided)
This!
I am already seeing a lack of appreciation for the actual acquisition of knowledge. I can even see it in myself: "I don't need to know, I can google that."
LLMs are the next step.
But I grew up and went to school before these tools happened.
My cousin, who is a teacher, already had to adapt to kids having access to Google/Wikipedia at all times. He has changed his history exams accordingly, basing them much more on understanding than on knowing some numbers.
ChatGPT will allow students to simulate understanding.
I don't really want to sound old and grumpy. But I probably do when I tell our friends' teenagers how important it is that they learn to acquire knowledge and train their brains accordingly.
I can get a lot out of ChatGPT because I am an expert in my field. They can use LLMs on their way to becoming experts themselves, of course, but they still have to put in some effort. Their effort will differ from mine.
What can I tell them to make them see the need to become good enough at their "stuff" to be able to tell when the LLM makes a mistake? And what types of effort will/should they prioritize?
Reviewing code is harder than writing it.
And it's already been shown that trusting LLM output inhibits critical-thinking skills. (In a Microsoft study that they then tried to bury.)
Do you have a link to the "tried to bury it" study? Or a reference? I'd be interested in reading more about that.
My problem with "AI" stuff (other than the ethical problems regarding environmental issues, plagiarism, and capitalism in general) is that it's just regurgitating existing information, but in a less useful way.
There's no learning journey. There's no surrounding context (documentation, discussion, etc.) that will clarify anything you don't understand.
The only AI application I've had be worth a darn is generating a flowchart of a process.
Because I asked twelve humans to do it first, and they all refused, telling me it was too hard.
Things I use genAI for:
Providing context for industry jargon: So, for example, I may not be sure how to correctly translate a Japanese phrase that reads like "islands of austenite in martensite", the machine translation isn't great, and I don't really know what to search for in English: it's likely that providing some background to the LLM will result in some in-context text that tells me the correct term (martensite-austenite constituent). (Note this is not an actual example, I already had this term by another route, just illustrative.)
Providing leads for further searches: if I want to know what the orders correspond to, geographically and historically, on the John Company map, the obvious best source would be Cole Wehrle, but he's a busy man and unlikely to answer my questions. No-one else I know has a clue. Digging up the necessary documents to try and piece it all together seems like a lot of work for such low-value data. So the genAI collates a lot of information I don't have easy access to, makes some horrendous errors, but gives me leads to start looking for better answers.
And soon, apparently: doing 95% of my translation. Not looking forward to this one, but that's the direction the company is pushing for.
My company currently has a very limited approved use case for AI. Mostly because of the concern around sharing proprietary and secret data with these services.
I mean, we're a datacenter company and we've built out several datacenter suites/floors for AI/ML workloads… But it's more profitable to sell that space/power to tenant customers than consume it ourselves.
Paper reference here; "tried to bury" is perhaps a bit strong, but they did a minimal publication with no fanfare at all, unlike everything else with "AI" in it.
I will not be surprised if most of the existing AI tech disappears. It's incredibly expensive, and it doesn't work. No one is going to pay what it actually costs to run the stuff when the VC money runs out.
LLMs are a dead end, and they're garbage, and they're getting worse. More expensive to train, more expensive to run, and the results are getting worse. AI slop is really fucking them up. They ingest garbage from the previous models, and it reduces the quality of the next one. It's not just the straight garbage-in, garbage-out problem, but also the shrinking breadth of knowledge: LLMs are just fancy autocomplete, and things that occur rarely in the data are less likely to appear in the output, so the new models never see them. That also creates feedback that makes them worse.
It appears the legal system is catching up with LLMs as well: once the companies have to start paying to train on copyrighted materials, if they are even allowed to, that suddenly changes the numbers wildly against LLMs, even more than they already are.
My new hobby: following the "watching AI slowly drive Microsoft employees insane" thread on Reddit.
You mean the concept, or the current LLMs and their companies? Because I still think the whole idea is going to stay and keep being developed.
If ChatGPT goes away, well, then it was HotBot or AltaVista and not Google. Indeed it is quite possible that the first crop of these new things will do what the first search engines did: vanish.
But we have now seen that LLMs can be quite capable at some things, even if less capable than the hype cycle suggests. And I think that will lead to further research and improvement.
But just like cars still need drivers, despite everything some techbros tried to tell us, most tasks still need humans at the helm. Anyone trying to let LLMs work unattended will get some nasty surprises.
Right now, I'll take it as an improved search engine any day over the enshittified Google results or wading through Stack Overflow answers.
It is fascinating contrasting this to the "you will soon be obsolete" conversations going on elsewhere.
I can see LLMs thriving in closed systems. I would love to have a company AI that holds our collective knowledge and can pretty much tell me what this thing is, because I have never touched this feature before.
That's something ChatGPT can't do, as this is closed corporate information.
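The closed-system assistant described here is essentially retrieval-augmented generation: search the internal docs for the most relevant snippet, then hand that to the model as context. A toy sketch, with the document store, query, and word-count similarity all invented for illustration (a real system would use embedding models instead of word counts):

```python
# Toy retrieval step of a RAG pipeline over internal docs.
# Doc store and query are made up for illustration.
import math
from collections import Counter

docs = {
    "billing-faq": "invoices are generated on the first of each month",
    "feature-x": "feature x toggles the legacy export path for large tenants",
}

def similarity(a: str, b: str) -> float:
    """Cosine similarity over simple word counts (a stand-in for embeddings)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Return the doc snippet most similar to the query."""
    return max(docs.values(), key=lambda text: similarity(query, text))

# The retrieved snippet would then be prepended to the LLM prompt,
# so the model answers from company knowledge it was never trained on.
context = retrieve("what does feature x do")
```

The point of the design is that the proprietary docs stay on company infrastructure; only the retrieved snippet plus the question ever reaches the model.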
Today I learned about a patent application for gamifying the generation of valid nonce values for blockchains, because doing so uses a lot of processing power. Yes, that's right: blockchain mining is too resource-intensive, so they are trying to get humans to do it now…
(Incidentally, this was also the first time I ever heard "nonce" mean anything other than "pedophile")
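The "valid nonce" search that patent wants to gamify is just brute-force proof-of-work: keep hashing until the digest meets a difficulty target. A minimal sketch, with the block data and difficulty invented for illustration (real chains use different encodings and targets):

```python
# Toy proof-of-work: find a nonce so that sha256(data + nonce)
# starts with `difficulty` zero hex digits.
import hashlib

def find_nonce(block_data: bytes, difficulty: int) -> int:
    """Try nonces 0, 1, 2, ... until the hash meets the target prefix."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero digit multiplies the expected work by 16,
# which is why mining burns so much processing power.
nonce = find_nonce(b"example block header", 4)
```

It also shows why "get humans to do it" is absurd: the work is nothing but blind hash evaluations, with no judgment involved.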
Iāve never heard nonce in the context of paedophilia, but have been annoyed a long time that nonce means a throwaway, one-time-use value.
I would much prefer it be the zero equivalent of "once". This would, of course, involve changing the pronunciation.
Q: "How many times have you won?"
Respondent #1: "Once"
Respondent #2: "Nonce"