Rix Groenboom is lector New Business & ICT at Hanze University of Applied Sciences in Groningen.

Opinion

Programming like it’s 1984

Reading time: 3 minutes

Writing code with ChatGPT-like command prompts sends Rix Groenboom back to his Basic days.

Over the Christmas holidays, I paid a visit to the archive of Byte Magazine: over 22 years of microcomputer development, starting in September 1975 with the headline “Computers, the world’s greatest toy.” Some issues ran to over 300 pages, full of advertisements and ‘online’ shopping options using postal orders.

Older readers will remember hopping on board this new technology, and the memories it invokes are warm ones. Home computers like the Commodore 64 and the ZX80/81/Spectrum appeared, with manufacturers such as Amstrad, Philips, Sony and Schneider marketing their own machines – even the BBC did so. They were mostly equipped with an assembler (for the Z80 or the MOS 6510 processor) and, of course, Basic.

Manufacturers trading blows over which architecture or DIY approach was best reminds me of today’s debate about generative AI and the different large language models (LLMs) competing in internet rankings. There is, however, a deeper analogy.

During the recent aiGrunn conference, there were a number of interesting talks on using LLMs for writing code, in particular through the ChatGPT API. Those talks sent me back to my days of programming in Basic. There were pitches about how to divide a task into subtasks, about ‘loops’ to iterate over different prompts depending on the output, and about handling the maximum context size – which resembles memory management on home computers. My CPC 464 had 64 KB of RAM, of which 42 KB was available. If you were running low, you could steal from the 12k of graphics memory as well (using peek and poke instructions).
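The pattern those talks described can be sketched in a few lines. This is only an illustration, not code from any of the talks: `call_llm` is a hypothetical stand-in for a real ChatGPT API call, and the token count and context limit are crude placeholders.

```python
# Sketch of the prompt-loop pattern: split a task into subtasks,
# iterate prompts, and keep an eye on the context budget.
# call_llm is a hypothetical stand-in for a real LLM API call.

MAX_CONTEXT_TOKENS = 1024  # illustrative ceiling, like the old memory limit

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"result for: {prompt.splitlines()[-1]}"

def rough_token_count(text: str) -> int:
    # Crude approximation: one token per whitespace-separated word.
    return len(text.split())

def solve(task: str, subtasks: list[str]) -> list[str]:
    context = task
    results = []
    for sub in subtasks:
        prompt = f"{context}\nSubtask: {sub}"
        if rough_token_count(prompt) > MAX_CONTEXT_TOKENS:
            # The 1984 trick, reborn: running low, so reclaim space
            # by trimming the oldest part of the context.
            context = context[-500:]
            prompt = f"{context}\nSubtask: {sub}"
        answer = call_llm(prompt)
        results.append(answer)
        context += "\n" + answer  # feed the output into the next prompt
    return results
```

The loop feeding each answer back into the next prompt is the modern counterpart of iterating with GOTO while peeking and poking around a fixed memory map.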

ChatGPT shows us a ‘command prompt’ similar to the inviting ‘Ready’ of the Basic interpreter. The API provides ways to interact directly with the model and to use a higher ‘programming language’ to give instructions to the machine. LLMs are becoming an OS in themselves. You could even say that trying to solve a problem in a single prompt harks back to the competitions to write a Basic program in one line of at most 1,024 tokens, such as those in the Dutch science magazine Kijk.

With their current non-deterministic behavior, LLMs do differ from the familiar Von Neumann-based architectures. Interestingly, reasoning about non-deterministic algorithms has been understood since the 80s as well. Formalisms like Communicating Sequential Processes could become useful again in developing an ‘LLM science’ – which reinforces my déjà vu, as I once studied those frameworks actively.

Another big difference is the huge amount of funding and the speed of development. For the home computer, entrepreneurs and the public supplied the funding. Now, it’s the big tech firms that are running the show, supported by venture capital. For compute power, we have a full arsenal of cloud providers that can run the GPU clusters for you. And all the neuromorphic computing efforts (such as the work on Cognigron at the University of Groningen) could bring massive breakthroughs in the years ahead.

“We wanted flying cars, instead we got 140 characters,” Peter Thiel observed somewhat cynically about the state of innovation some ten years ago (referring to the movie “Back to the Future” from 1985). It could well be that this year we’ve reached the inflection point of the hockey-stick model of innovation. Fasten your seat belts and enjoy the ride in the modern equivalent of a DeLorean.
