Using LLMs to write self-help style books like this one is a slam dunk use case, honestly. This book was dry but full of anecdotes and listicles --- the perfect slate for some big name needing a book deal.
I'm sure I heard that something like this existed for the JVM ages ago (like 15 years). I don't remember the details, so it might not be quite the same, but a colleague was telling me about some tech that would test your concurrent code by automatically selecting bad scheduling orders.
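I don't remember which JVM tool that was either (OpenJDK's jcstress works in this space, and IBM's old ConTest did noise injection, if I recall correctly), but the core trick, perturbing the schedule near shared-state accesses so that races surface, can be sketched in a few lines. This is a minimal illustration rather than any particular tool's method; the function name and the yield probability are invented for the demo:

```python
import random
import threading
import time

counter = 0  # shared state, deliberately unsynchronized

def chaotic_increment(n):
    """Increment the shared counter n times with a non-atomic read-modify-write."""
    global counter
    for _ in range(n):
        tmp = counter
        # Play the "adversarial scheduler": randomly yield between the read
        # and the write, widening the race window so lost updates show up.
        if random.random() < 0.5:
            time.sleep(0)  # hint the OS to reschedule another thread
        counter = tmp + 1

threads = [threading.Thread(target=chaotic_increment, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A correct program would end at exactly 4000; the injected yields make
# it very likely some increments were lost along the way.
print(counter)
```

Real tools do this far more systematically (instrumenting bytecode, enumerating interleavings), but even crude noise injection like this tends to turn a "works on my machine" race into one that fails almost every run.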
Somehow we ended up doing the opposite of compression (and I do not mean decompression): the user writes a three-word email, AI expands it to several paragraphs of useless text and sends it to the recipient. There, another AI digests all that text back down to some other three words, losing the original meaning and not only wasting CPU on AI tasks but wasting network bandwidth by sending a lot of useless data. I hate the 21st century.
Depends a lot on investor expectations. If they think the opportunity for growth and market expansion is coming to an end, then they will push for higher returns.
Also, Zen 4c has AVX-512 support while being only ~35% bigger than Gracemont (although TSMC's node advantage means you should probably add another 10% or so). This isn't really a fair comparison because Zen 4c is optimized very differently from Intel's E-cores, but I do think it shows that AVX-512 can be implemented with a reasonable footprint.
Or, if Intel really didn't want to do that, they needed to get AVX-10 ready for 2020 rather than going back and forth on it for ~8 years.
> One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
Great point. The perfect example: (From Wiki):
> In 2024, Hassabis and John M. Jumper were jointly awarded the Nobel Prize in Chemistry for their AI research contributions for protein structure prediction.
AFAIK: They are talking about DeepMind AlphaFold.
Related: (Also from Wiki):
> Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who is the CEO.
Async, green threads, etc. can be useful tools, but they are never the out-of-the-box solution that spares you from thinking about concurrency and program flow, even if people keep thinking they are. Learn how to reason about concurrency; there is no way to avoid it.
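To make that concrete, here is a toy example (the bank-balance scenario is invented for illustration): even single-threaded asyncio code has a classic check-then-act race, because every `await` is a point where another task can interleave.

```python
import asyncio

balance = 100  # shared state; no locks, "just async"

async def withdraw(amount):
    global balance
    if balance >= amount:
        # This await yields control; another task can run here and also
        # pass the balance check before we deduct. Check-then-act race.
        await asyncio.sleep(0)
        balance -= amount
        return True
    return False

async def main():
    # Two concurrent withdrawals of 80 from a balance of 100.
    return await asyncio.gather(withdraw(80), withdraw(80))

results = asyncio.run(main())
print(balance)  # -60: both tasks passed the check before either deducted
```

No threads, no parallelism, and still an overdraft, because the programmer stopped thinking about interleavings at the `await`.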
A super helpful assembly of lots of on-point data, though a bit too much to digest quickly. Mary Meeker is great.
Some limitations:
- Unhelpful modernity-scale trend/hype-lines. Everyone knows prospects are big and real.
- No significant coverage of robotics or factory automation? (TAM for physical products is 15x search + streaming + SaaS.)
- No insight? No new categories, surprising predictions, critical technologies identified?
Surprises:
- AI productivity improvements are marginal, esp. relative to the concern over jobs
- The US share of top public companies by market cap increased from ~50% in 1995 to ~85% in 2025. Seems big; or is it an artifact of the demographics of retirement investments? Or is it less significant given growing private capital markets?
What I would like addressed: the AI means of production seem very capital-intensive, even as the marginal cost of consumption is SaaS-scalable (i.e., big producers, small consumers). I have some concern that AI development directions are decided in relatively few companies (which are biased toward SaaS over manufacturing, where consumers are closer to producers in size). This increases the likelihood of a generational whiff (a mistake I suspect China won't make).
As an aside, I wish Elon Musk would pivot xAI out of SaaS AI (and science AI), focusing exclusively on manufacturing robotics -- dogfooding at Tesla, SpaceX, and even Boring -- with the simpler autonomy of controlled environments but the hard problem of not custom-building everything every time. They're well positioned, and he could learn some discipline from working with downstream and upstream partners as peers (instead of slavish employees, fan investors, and dull consumers or slow governments as customers). He'd redeem himself as a builder of stuff that builds, so we can make infrastructure for generations to come.
Anecdotal, but AI was what enabled me to learn French when I was doing that. Before LLMs, I would've had to pay a lot more money to get the class time I'd need, but the availability of Google Translate and DeepL meant that some meaningful, casual learning was within reach. I could reasonably study, try to figure things out, and have questions ready for the teachers the two or three times a week I had lessons.
Nowadays I'm learning my parents' tongue (Cantonese) and Mandarin. It's just comical how badly the LLMs do sometimes. I swear they roll a natural 1 on a d20 and then just randomly drop a phrase. Or at least that's my head canon. They're just playing DnD on the side.
What do you mean by "purely with logic, no guessing"?
"Guess and backtrack" is a totally valid form of deduction for pen-and-paper puzzles; it's just not very satisfying. But often (always?) there is a satisfying deduction technique that could have replaced the guess-and-check; it may just be fairly obscure.
Or do you just mean where the clues for the raster don't result in a unique solution?
We all know why. And we all know why they're still doing the weird Covid-era pre-recorded keynotes, unlike almost all other tech companies. They're afraid of (mocking) laughter, booing, and jeering from devs in the crowd that they keep treating like dirt.
The childish retribution for Gruber's mild criticism is just the cherry on top, but it might have happened regardless.
You never know when you start a company. Maybe you'll never turn a profit and it doesn't matter. Maybe you lose out on a few million out of billions... and maybe it still doesn't matter?
De Bruijn indices are great, I use them to construct what I call the "De Bruijn Abstraction Algebra"; it comes in two forms, one is used to give a characterisation of alpha equivalence for abstraction algebra, the other one is used to prove completeness of abstraction logic. It is described in [1], and I have described and proven correct everything in excruciating detail. That makes it quite a tough read, although in the end, it is elementary and simple.
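For readers who haven't met them: with De Bruijn indices, a bound variable is replaced by the number of binders between its use and the lambda that binds it, so alpha-equivalent terms become literally identical, and alpha equivalence collapses to structural equality. A small illustrative sketch (the tuple encoding is mine, not the paper's):

```python
def to_debruijn(term, env=()):
    """Convert a named lambda term to De Bruijn form.

    Named terms are tuples: ('var', name) | ('lam', name, body) | ('app', f, a).
    """
    tag = term[0]
    if tag == 'var':
        # the index counts the binders between the use and its lambda
        return ('var', env.index(term[1]))
    if tag == 'lam':
        return ('lam', to_debruijn(term[2], (term[1],) + env))
    return ('app', to_debruijn(term[1], env), to_debruijn(term[2], env))

# λx.λy.x and λa.λb.a differ only in the names of bound variables...
k1 = ('lam', 'x', ('lam', 'y', ('var', 'x')))
k2 = ('lam', 'a', ('lam', 'b', ('var', 'a')))

# ...but share one De Bruijn form, so alpha-equivalence is plain equality:
print(to_debruijn(k1))                     # ('lam', ('lam', ('var', 1)))
print(to_debruijn(k1) == to_debruijn(k2))  # True
```

This nameless representation is what makes them such a convenient foundation for proving things about binding, as in the paper's treatment of abstraction algebra.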
But that's precisely the point, and it does give insight. Google scaled off of existing infrastructure like computers. Computers scaled off of existing infrastructure like electricity.
The point is to compare current era of scaling to the previous era and see how much faster it is.
It's not comparing Google to OpenAI. It's comparing the environment that produced Google to the environment that produced OpenAI.
> I'd be prepared to argue that most humans aren't guessing most of the time.
Research suggests otherwise[1]. Action seems largely based on intuition or other non-verbal processes in the brain with rationalization happening post-hoc.
Exceptionally few less-democratic countries are functional enough to take great advantage of AI's potential, much less execute in some sort of super-fast manner compared to the democratic nations. You can count those countries on one hand.
It's overwhelmingly the case that affluence and national wealth go hand in hand with greater democracy; the correlation is tight (and of course there are exceptions). All you need to do is look at the top ~50 nations by GDP per capita or median wealth per adult, then look at the bottom 50.
Less democratic nations will be left even further behind, as the richer democratic nations race ahead as they have been doing for most of the post WW2 era. The richer democratic nations will have the resources to make the required enormous investments. The more malevolent less democratic nations will of course make use of good-enough AI to do malicious things, not much about that will change. Their power position won't fundamentally change however.
In my mind, a well-formed nonogram is one that requires no backtracking. It's an interesting question though. I'll write some code in the next few days to check to see if my set of "unsolvable" puzzles include those with unique solutions given the clues.
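A minimal sketch of one way to do the uniqueness half of that check: enumerate the row fillings consistent with each row clue, guess whole rows, backtrack, and count solutions up to 2. This is my guess at an approach, not the site's actual code, and it does no pruning, so it's only sane for small puzzles:

```python
def row_options(clue, width):
    """All 0/1 tuples of length `width` whose runs of 1s match `clue`."""
    if not clue:
        return [(0,) * width]
    head, rest = clue[0], list(clue[1:])
    out = []
    # try every amount of leading blank padding that still leaves room
    max_pad = width - (sum(clue) + len(clue) - 1)
    for pad in range(max_pad + 1):
        prefix = (0,) * pad + (1,) * head
        if rest:
            for tail in row_options(rest, width - len(prefix) - 1):
                out.append(prefix + (0,) + tail)
        else:
            out.append(prefix + (0,) * (width - len(prefix)))
    return out

def runs(line):
    """Run-length clue of a 0/1 line, e.g. (1, 1, 0, 1) -> [2, 1]."""
    out, n = [], 0
    for cell in line:
        if cell:
            n += 1
        elif n:
            out.append(n)
            n = 0
    if n:
        out.append(n)
    return out

def count_solutions(row_clues, col_clues, limit=2):
    """Guess whole rows and backtrack; stop once `limit` solutions are found."""
    width = len(col_clues)
    options = [row_options(c, width) for c in row_clues]
    found = []

    def backtrack(grid):
        if len(found) >= limit:
            return
        if len(grid) == len(row_clues):
            if all(runs(col) == col_clues[j] for j, col in enumerate(zip(*grid))):
                found.append(list(grid))
            return
        for opt in options[len(grid)]:
            backtrack(grid + [opt])

    backtrack([])
    return len(found)

# rows [1],[1] with cols [1],[1]: both diagonals work, so 2 solutions
print(count_solutions([[1], [1]], [[1], [1]]))  # 2 -> ambiguous clues
```

A puzzle has a unique solution exactly when `count_solutions` returns 1; whether it is also solvable without backtracking is the stricter, separate property described above.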
Yeah, a "jump to unsolved" feature seems like it's going to be essential. I'll work on that. I haven't heard of the scrolling issue. What device/browser are you using?