These changes in direction (spending billions, freezing hiring) over just a few months show that these people are as clueless about what's going to happen with AI as everyone else. They just have the billions and therefore dictate where the money goes, but that's it.
This is a structural problem with our economy, much larger than just Facebook. Due to its large-scale concentration, the allocation of capital in the economy as a whole has become far less efficient over the last 20 years.
Many of us in the antitrust/competition law community are trying. One issue, specific to digital markets, is that the field has very few people who are both legally and technically literate. If you're a technical person looking for a career shift, moving into legal policy/academia has the potential to be quite high impact for that reason.
Gods, I would love to work more in a policy space, though my background is entirely technical.
A friend of mine has been trying to get into law school for a few years; she's technically competent and plenty intelligent, but it's been hard going for her to get in, plus it's multiple years of education before you can even attempt the bar. All of that sounds like far too much sunk cost for me to dally in, just to figure out if it's a path I would truly enjoy.
What ways could I engage with policy, coming from a technical background, that would serve as a useful stepping stone to a more policy-based career but don't require such an upfront cost as a law degree?
I guess it depends on your circumstances. In Europe, for instance, the cost of a degree is sometimes quite low. My gateway from tech to law was a part-time master's degree in political science, which cost around 200 euros a semester (in Germany). That degree gave me enough experience to then apply for a PhD in law.
Which brings me to the next point. Doing a law degree and passing the bar is perhaps the obvious path to doing policy things. It’s basically the only way that you can end up actively participating in courts, for example. But there are many other options! For myself, the plan is to stay in academia and not take any bar courses (then again, who knows what will happen!). Academics have lots of potential to shift policy, especially as neutral agents who aren’t paid by either side of particular debates. Our papers are read by policymakers and judges, who often don’t have the time or resources to think deeply about particularly gnarly topics. But there are lots of other options which could also work, and I guess finding a "niche" would depend on your specific circumstances, connections and skillset.
If you’re looking to spend more time thinking about policy issues, I’d start by simply sleuthing online. Bruce Schneier, for example, regularly writes excellent pieces at the intersection of technology and policy, which are very well hyperlinked to other high quality stuff. These kinds of blogs are a great way to get into the space, as well as to learn about opportunities which are coming up. Reading journal articles that sound interesting is a good option too (and US law journal articles are often quite accessible). There are also spaces offline, such as conferences which encourage both law and tech people (there’s one happening in Brussels soon [1]), or even institutions set up specifically to operate in this space and which have in-person events (Newspeak House comes to mind [2]).
Call up TechCongress and offer to volunteer for a cycle.
Law school is the same as med school: if you can’t see yourself living life as something that requires a JD, skip it. Just do the thing you want to do; unless that’s “dispense legal advice to paying clients and represent them in legal disputes” you can probably do it legally without a JD.
Also be aware that you are a lawyer when you graduate law school, and you don’t have to pass the bar unless that’s a requirement for your practice. For example, a general counsel of an internet startup might not have to be a member of the bar, but someone going into trial court to represent clients does. I would think you could be a staffer for a congressperson with a JD and without bar membership pretty easily.
Once, out of curiosity, I looked into how easily someone without a formal law degree and work experience could take the bar exam "for fun", and IIRC in my state it wasn't really possible.
The world would benefit greatly if the EU went ahead with a tech tax. It's crazy how much IT companies get away with things that would be the end of any other business.
CEOs were never credible to begin with, but what about Nobel laureate in Physics Geoffrey Hinton telling us to stop training radiologists? Nothing makes sense anymore.
Well, clearly he should stick to physics. Even if it's likely that AI will replace them soon, a lot of people would likely die unnecessarily if we ran too low on radiologists by ending the pipeline too soon. They're already overworked, and that can only go so far. It's not a bet we should make.
Social Media is basically what enabled me to actually be social during the Friendster/Early Myspace era. It helped me get to know people I'd met in real life, and meet other people within the city I lived in.
Now if you're not on LinkedIn, people question whether you are a real person or not.
I hope AI ends up like blockchain. It's there if you have a use-case for it, but it's not absolutely embedded in everything you do. Both are insanely cool technologies.
The media was saying NFTs are a reasonable investment and Web3 is the future, so I am not sure they have any remaining credibility.
We are at the awesome moment in history when the AI bubble is popping, so I am looking forward to a lot of journalists eating their words (not that anybody is keeping track, but they are wrong most of the time), a lot of LLM companies going under, and a domino crash in the stocks of everyone from Meta, OpenAI, AWS, Google and Microsoft to SoftBank (the same guys who gave money to Adam Neumann of WeWork).
I personally have just 20% of my net worth in stocks, as they seem very expensive right now. A crash would allow me to increase my allocation at reasonable prices.
I suppose if you’re operating on the assumption that tech stocks are vastly overinflated then this makes sense. Otherwise I would expect the people that are regularly buying these securities would be happy that they’re increasing in value, no?
The Ponzi scheme of SPY is great until it stops. 10% of America’s payroll gets lumped into it each month, and the generational wisdom is that you get a 10% ROI despite the economy growing 2%.
At some point that will collapse, and it won’t be pretty.
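To put rough numbers on that divergence (a back-of-the-envelope sketch with made-up round numbers, assuming a constant 10% return and 2% growth):

```python
# Purely illustrative: compound a 10% annual return vs. 2% economic growth.
market, economy = 1.0, 1.0
for _ in range(30):
    market *= 1.10   # assumed index return
    economy *= 1.02  # assumed economic growth
print(f"After 30 years: market {market:.1f}x, economy {economy:.1f}x")
# After 30 years: market 17.4x, economy 1.8x
```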
Valuations of companies should be tied to their real current profits! If a company is unprofitable now, it doesn't make sense, and is wholly wrong, that its stock trades at 1000x the multiple of other companies which actually turn a profit.
The difference between public ownership and public gambling is huge in its impact on society, especially when the market crashes.
So rather than someone auto-investing a slice of their paycheck into an S&P fund in their 401(k), they should instead learn how to evaluate company financials so they can pick winners from a non-tax-advantaged account?
This is a losing strategy for the large majority, and it's been demonstrated repeatedly that even professional investors can't beat the market, especially after considering fees.
No. The 'goal' of investing (i.e. regularly buying) is to end up owning as many shares as possible. That is achieved by buying low and selling high. Buyers benefit from lower prices, not higher.
So many investors get this concept wrong. I suppose they get excited because what they bought went up in value and they have a sense of being enriched. But, that is backwards. That is what they want 20-40 years from now when it will almost certainly be the case that prices are not just higher, but much higher, than today. But, when they are buying shares, the goal is to pay the lowest price possible. If I am 20 years old, I am screaming: crash and burn baby! Crash and burn! Gimme those shares at 50% off yesterday's price.
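To make that concrete, here's a toy sketch with made-up prices and contributions, showing why the same fixed contributions buy more shares through a crash:

```python
# Toy dollar-cost averaging: same $500/month under two made-up price paths.
contribution = 500.0
paths = {
    "flat":  [100, 100, 100, 100],
    "crash": [100, 50, 50, 100],   # 50% crash, then full recovery
}
for name, prices in paths.items():
    shares = sum(contribution / p for p in prices)
    final_value = shares * prices[-1]
    print(f"{name}: {shares:.0f} shares, worth ${final_value:,.0f} at the end")
# flat: 20 shares, worth $2,000 at the end
# crash: 30 shares, worth $3,000 at the end
```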
> I am screaming: crash and burn baby! Crash and burn! Gimme those shares at 50% off yesterday's price.
Sure, but once you reach the point where you have a lot of money in the market you probably won't enjoy watching 50% of it disappear, even if it means your next auto investment is for a nice bargain price.
Also, when the stock market crashes usually bad things accompany it. Like a depressed economy and job losses.
> Also, when the stock market crashes usually bad things accompany it. Like a depressed economy and job losses.
It's our own fault for tying the stock market's performance to our economy's performance. Why should I, a train worker, have my pension affected by Sam Altman's bad decision-making or by Enron's lies and deception?
It's our own fault that the stock market is so volatile and that we tie so much of our economy to a financial gambling machine that's become increasingly divorced from reality in the last couple of decades. Like you are putting money on a stock that trades at 1000 on a company that is 10 years away from being profitable? You deserve your money to go poof.
> Like you are putting money on a stock that trades at 1000 on a company that is 10 years away from being profitable? You deserve your money to go poof.
Who is suggesting that?
NVDA trades at 57x earnings, MSFT 37, GOOG 22. The article is about META and they are at 27x. These are the big companies that dominate the S&P that we're talking about.
I don't think anyone is suggesting to put their life savings into Anthropic. They can't anyway, it's not public.
The S&P P/E is 30, which is high, but still lower than it was in 2020, before the AI "bubble" started.
With stock prices divorced from reality, the ones who benefit are those with the funds to buy in volume, the gamblers, and the ones hyping the stocks and creating the illusion of profitability and growth. Years ago it would have been unthinkable to have so many unprofitable companies with an unclear path to profitability carrying such high valuations, but we have normalized frenzied gambling as a society.
The current absolute balloon of a market is about to pop, and sadly, the people who hyped the stocks are also the ones who know when to jump ship, while the hapless schmucks who believed the hype will most likely lose their money, along with a lot of folks whose retirement investment funds either didn't do their due diligence or were outright greedy.
In a way, as a society we deserve this upcoming crash, because we allow charlatans and con people like Musk, Zuck and Sam to sell us snake oil.
That's one interpretation, but nobody really knows. It's also possible that they got a bunch of big egos in a room and decided they didn't need any more until they figured out how to organize things.
Especially when there are hours of public footage of the decision maker in question not sounding particularly astute as a technical or strategic thinker.
That was my point, if someone thinks that Meta is overvalued, they can put their money where their mouth is. The fact that the share price hasn’t cratered is a kind of collective belief in the opposite.
Edgy prediction: Meta is irrelevant and on a path to even further irrelevance, and, fingers crossed, a bankruptcy, or at least Zuck being removed as the main man.
Edgier prediction: Meta is too (big, but more importantly) relevant to (be allowed to) fail, because it can be co-opted by TLAs due to its apps being pre-loaded on mobile devices.
It will be chaos for those working at Meta and those invested in them too much without an appropriate hedge. I doubt I will notice much, to be honest.
Even if Meta tanked, unless Messenger/Whatsapp stop working, it’s kind of beside the point how much their stock trades for. Everyone will just use whatever has or keeps the most public interest, whether that is Meta-owned or something else.
The worrying aspect is that for Meta to really tank in value, the shit has to have already hit the fan, and it probably would not be isolated to Meta.
My point in my prior comment was that Meta serves the purposes of the IC status quo just by doing what they’re already doing. Cloudflare too, in a way.
I meant that if Meta actually goes under, since they are clearly a decent part of the S&P 500, it might create a spiraling effect similar to Lehman Brothers, which could affect the world economy and thus you and me both.
The problem is that their products are getting worse and worse. Signal is already taking a huge share from WhatsApp (ads and AI chat bots, really?) and Messenger.
TikTok absolutely obliterated Instagram. Facebook is sliding into irrelevancy, and most importantly, they have a lot of failed products like Oculus, the Metaverse (wtf is it anyway?), Llama, etc. Now they are sliding into even more irrelevance and burning money even faster trying to poach extremely expensive OpenAI folks. My conspiracy theory is that Facebook's ad earnings numbers are somehow a scam.
After so many bad decisions on their part, and so much waste and bad execution, I can't see them surviving the next 5 years.
They can just buy other companies for 5 years and coast, or would have, if not for antitrust concerns under Biden. They can afford to pay hundreds of millions of dollars to rockstars for well over 5 years, and acquihire their way to the next big thing. I think they’re probably appropriately valued alongside other trillion-dollar companies, but they will likely find that it’s less lonely at the top than anticipated.
Signal serves IC interests too by requiring phone numbers.
> They can just buy other companies for 5 years and coast
No, what they could do in the past is not at all how they can operate today. They can't afford to pay the rockstars anymore; they went through multiple rounds of layoffs. They also can't afford to let the stock drop too low. Basically they are in a corner, and I love it. Fingers crossed that within the next five years they shake up upper management and Zuck is out.
They’re laying off folks, but that doesn’t mean they’re doing it due to payroll pressure. They’re a megacorp. They can’t go broke from payroll, they have accountants for that. The folks getting acquihired and poached aren’t at risk of being laid off as long as they’re producing value. If the value they bring at Meta is also not being provided elsewhere, so much the better for Meta. It hurts Meta’s competitors more than it hurts Meta, because Meta won’t miss the money. They don’t need high headcount, they need folks who are irreplaceable. It’s a different hiring process for different jobs.
I doubt Zuck is out anytime soon, unless folks stop using their products compared to alternatives. I think it’s possible, but the odds are at best even for him to go in 5 years. In 10 years, who can say? Facebook users are pretty locked in because there’s nothing else like it for the users that regularly use it. Facebook users who aren’t on alternatives aren’t just going to switch to Reddit or TikTok overnight. Why would they? I can’t follow your reasoning; I understand not being a fan of Zuck or Meta, I guess, but their business seems pretty strong right now, though that is subject to change along with consumer whims.
I can't/won't short the stock because shorting is usually for 14 days or so, and I can't be certain in that timeframe about Meta or any company.
I mean, theoretically you could short a company for a really long time, it seems; I just searched. I had always assumed it was usually about 14 days, but still.
Isn’t that the reason options even exist? You need to know you won’t starve and die if your harvest doesn’t come in. I’ll admit that I’m no expert on finance and innovations in financial instruments, but I think short selling has been around in some form or another for centuries.
> The practice of short selling was likely invented in 1609 by Dutch businessman Isaac Le Maire, a sizeable shareholder of the Dutch East India Company (Vereenigde Oostindische Compagnie or VOC in Dutch).
Maybe also like adding ads to WhatsApp, because we gotta squeeze our users so we can spend on... AI gurus?
Meta has not had a win since they named themselves Meta. It's enjoyable to watch them flail around like clueless morons jumping on every fad and wasting their time and money.
Seeing what Meta does makes me feel better / more comfortable with myself. Almost childish, I know.
Maybe this sounds selfish, but it's a little fun for me to see them lose. I just don't like Meta and its privacy-insensitive ad network bullshit.
Like the fact that if someone took a photo and then deleted it, they show them beauty ads because they must be insecure. I can't give two cents about the growth of such a Black Mirror-esque company.
> I can't give two cents about the growth of such a Black Mirror-esque company
I would donate my two cents, or even more, to witness their downfall though. I left WhatsApp years ago and haven't used any of their other services like FB or Instagram. I don't want to contribute to a company that actively helped a couple of genocides (Myanmar), helped elect a dictator or two (Philippines), spread racist propaganda, and, most recently, allowed women to be called 'personal objects'.
Their tech is far from impressive, their products are far from impressive, the only impressive thing is that they are still in business.
Yes, yes I do. How much practical experience does someone with billions of dollars have with the average person, the average employee, the average job, and the kind of skills and desires that normal people possess? How much does today's society and culture and technology resemble society even just 15 years ago? Being a billionaire allows them to put themselves into their own social and cultural bubble surrounded by sycophants.
I really hope they told the Louisiana regulators this in the meeting yesterday, because the argument was something along the lines of “Meta is worth $2T”.
At the current price of $107,586 per kilo of gold, that is 731,507 kilos of gold per year. A rail box car has a load limit of 92,500 kilos. Eight full box cars, or 16 half-full box cars, of gold currently represent the annual output of META.
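Quick sanity check on the arithmetic, using only the figures quoted above (the implied ~$78.7B annual figure follows from them; it's not an official number):

```python
# Sanity check using only the figures quoted above.
price_per_kg = 107_586      # USD per kilo of gold
kilos_per_year = 731_507
boxcar_limit_kg = 92_500    # rail box car load limit

implied_output = kilos_per_year * price_per_kg
print(f"Implied annual output: ${implied_output / 1e9:.1f}B")   # ~$78.7B
print(f"Box cars: {kilos_per_year / boxcar_limit_kg:.1f}")      # ~7.9, i.e. eight
```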
The financials from the link do not specifically call out Depreciation Expense. But Operating Income should take Depreciation Expense into account.
The financials have a line below the Net Income line called "Reconciled depreciation" with about $16.7 billion. I do not know what that means (maybe this is how they get to the EBITDA metric), but maybe this is the number you are looking for.
Zuckerberg either doesn't have the resolve for changing the business, or just keeps picking the wrong directions (depending on your biases).
First Facebook tried to pivot into mobile, pushed really hard for a short time, and then flopped. Then Facebook tried really hard, for a while, to make the Metaverse a thing, but eventually Meta stopped finding it interesting and significantly reduced investment. Then AI was the big thing, and Meta put a huge amount of money into it, chasing after other companies with an arguably novel approach compared to the rest of big tech... but now it seems to be backing out, or at least messaging less commitment. Oh, and I think there was some crypto in there too at one point?
I'm not saying that they should have stuck with any of these. The business may not have worked in each case, and that's fine, but spending billions on each one seems like a bad idea. Zuckerberg is great at chasing the next big thing, but seemingly bad at landing the next big thing. He either needs to chase them more tentatively, investing far less, or he needs to stick with them long enough to work out all the issues and build the growth over the long term.
For the past 15 years, mobile has been the main revenue source for Facebook. As big as Facebook is, they're at the mercy of two competitors: Apple and Google. Apple has been very hostile to Facebook, because Facebook makes a shitload of money off Apple's platform and refused to pay a certain percentage to Apple - unlike Google, who pays $20B a year to access iOS users. Apple tried to cut Facebook off with ATT in iOS 14, but it didn't work.
Because of this, Zuckerberg has to be incredibly paranoid about controlling his company's destiny, to stop relying on others' platforms to deliver ads. It would be catastrophic for Facebook not to be a main player on the next computing platform, and they're currently making a lot of money from their other businesses. Zuckerberg is ruthless and paranoid; he has total control of Facebook and he will use all its resources to control the next big thing. I think it comes down to this: Zuckerberg believes it's cheaper to be wrong than to miss out on the next platform, and Facebook can afford to be wrong (to a certain extent).
> For the past 15 years, mobile has been the main revenue source for Facebook. As big as Facebook is, they're at the mercy of two competitors
Before mobile was this big, Facebook tried their own platform and bottled it. This was during the period when the market was still diverse, with Windows phones, Blackberries, etc.
They also tried to make mobile web a thing for a few years past when it was obvious that native apps were the way forward.
Facebook certainly did not have the resources and experience to make a mobile OS at that point. Microsoft tried and failed; there was no space for a third mobile OS.
> They also tried to make mobile web a thing for a few years past when it was obvious that native apps were the way forward.
This was one of the first points of friction Facebook encountered with Apple. They wanted to make their own store inside the Facebook app on iOS, but obviously Apple said no. Maybe doing the Facebook app in HTML5 was a way to protest against the way Apple was moving things forward, but again it didn't work: their app was crap, and they rewrote everything in native code.
Don't forget gaming back in the day! Facebook games started taking off, then Facebook decided that the _only_ way you could get paid on the Facebook platform was with Facebook Credits, and to incentivize Facebook as the gaming platform of choice, Facebook would give out free Credits to players to spend on Facebook games. Of course, if your game was the one they chose to spend those Credits on, you wouldn't actually get paid, not with promotional credits, what, are you crazy?
No, I'm not still bitter from that era, why do you ask?
Cory Doctorow has a compelling theory that the megatech companies have to appear to be startups, or else their share price reverts to normal multiples. Hence the continuous string of increasingly over-hyped "game-changing technologies" they all (not just Meta) keep rolling out.
VR, blockchain and LLMs have their value, but it's a tiny fraction of the insane amounts of money being pumped into these bubbles. There will be tears before bedtime.
Indeed, for big valley tech companies it's crucial to have a new business developing in the wings which has plausible potential to be the "next big thing." They're desperate to keep their stock price from being evaluated solely on trailing twelve month revenue, so having a shiny, ephemeral hype-magnet to attract inflated growth expectations is essential.
So far, it appears the psychology of investors allows the new thing to fail to deliver big revenue and be tacitly dropped - as long as there's a new new thing to replace it as the aspirational vehicle. Like any good mark in a con game, tech investors want to believe.
> as long as there's a new new thing to replace it as the aspirational vehicle. Like any good mark in a con game, tech investors want to believe.
Yea, but it seems like the new new thing needs to get progressively bigger with each cycle, which is why I think the shell game is almost over.
They really can't overpromise much more than they did with the AI hype-cycle.
It feels like a startup valuation, in that having a down round is... not favored by investors; I feel like a step-down in promises would be similarly problematic.
> They really can't overpromise much more than they did with the AI hype-cycle.
While I agree that "replace all human labor" is already pretty high up there on the overreaching u/dis-topian promise list, there are still a few things left.
Perhaps the next dream to sell will be digitizing the minds of Valued Shareholders so that they can grasp immortality inside the computer.
Yup, which is why I think the bubble is going to burst. What's surprising to me is that a lot of normal folks might be hurt by this too, in the sense that the S&P 500 has really concentrated its holdings in companies that believe in AI, and the hype train seems to be coming to an end, with the bubble nearing its burst.
Or every investor just expects the other investors will fall for this, but the result is the same: number go up so buy more. It could be no one really falls for it at all.
The economy could be doing really badly while the stock market is doing well; they aren't directly correlated anymore, imo.
It can take a long time for the stock market to actually correct, but I know one thing: it will be corrected some day, and maybe they will call it the bursting of a bubble.
> Cory Doctorow has a compelling theory that the megatech companies have to appear to be startups, or else their share price reverts to normal multiples.
This may well be true, but my point is more that Facebook/Meta/Zuckerberg seem almost uniquely unable to turn the startups into great new businesses, when compared with the other big tech companies.
Amazon added cloud and Prime; Microsoft added cloud, Xbox, 365; Google added Chrome, Android, cloud, YouTube, consumer subscriptions, Workspace, etc.; Netflix added streaming and their own content; Apple added mobile, wearables, subscriptions.
Meta though, they've got an abandoned phone platform from years ago, a half-baked Metaverse that is being defunded, a small hardware business for the Quest, a pro VR headset that got defunded, a crypto business that got deprioritised, and an LLM that's expensive relative to open competitors and underperforms relative to closed competitors... which the tide appears to be turning on as the AI bubble reaches popping point.
> Facebook/Meta/Zuckerberg seem almost uniquely unable to turn the startups into great new businesses, when compared with the other big tech companies.
Really? Instagram, WhatsApp... the two most used apps & services in the world?
> Google added Chrome, Android, cloud, YouTube,
It's arguable whether GCP is profitable, but Chrome/Android/YT are money-losing businesses if you exclude ad revenue.
Maybe he can work on making Facebook not be such a piece of shit. I feel like he got his one lucky break and should just give up on trying to make more money. He already has billions. Is he proud of Facebook as a product? Because as a user it feels sluggish, buggy, inconsistent, and just full of low quality trash. I would be embarrassed if I was him.
The Metaverse was a flop, maybe, but Meta makes something like $1 billion a week from its mobile apps; it'd be crazy to say that is not successful.
The fact that it was so successful, and that Zuck picked mobile as the next big thing before many of his peers and against what managers in the company wanted to do, is probably what has made him overconfident that he can do it again.
No. Back when smartphones were still in the process of taking over the market, Zuck saw the adoption curve and realized that future ad revenue would be from phone scrollers.
At the time most features were designed and implemented first for desktop and later ported to mobile. He issued an edict to all hands: design and build for mobile first. Not at some point in the future but for everything, starting immediately.
Maybe this doesn't sound major, but for the company it was a turn on a dime, and the pivot was both well informed and highly successful in practice.
Call me classic and old school, but I call a company successful if it actually makes more money than it spends. Everything else is just driving debt and the economy, but no actual success.
>Then Facebook tried really hard, for a while, to make the Metaverse a thing, but eventually Meta stopped finding it interesting and significantly reduced investment.
That's a charitable description of a massive bonfire of cash and credibility for an end product that looks worse than a 1990s MMORPG and has fewer active users than a small town sports arena.
Compared to other recent bubbles (crypto, NFTs, and AI), it's practically quaint and lovable by comparison. About the only people it hurt are Mark Zuckerberg and the marketing grifters who tried to start companies around it.
If VR takes off 20 years from now and it's not Meta who benefits, then they're the short-sighted ones for having jumped on that bandwagon so early.
Besides, waiting for something to materialize before being able to declare that it is stupid is a cop out. What, are we waiting for NFTs to become useful? They are stupid now. VR is stupid and unsuccessful now. I ain't waiting to be able to declare that Meta screwed up both in VR and in the Metaverse whatever the Metaverse is.
It's important to analyze decisions within the context at the time, not the modern context.
When Facebook went into gaming, it was about the time they went public and they were in search of revenue. At the time, FB games were huge. It was the era of Farmville. Some thought that FB and Zynga would be the new Intel and Microsoft. This was also long before mobile gaming was really big, so gaming wasn't an unreasonable bet.
What really killed FB Gaming was not having a mobile platform. They tried. But they failed. We could live in a very different world if FB had partnered with Google (who had Android), but both saw each other as an existential threat.
After this, Zuckerberg paid $1 billion for Instagram. This was a 100x decision, much like Google buying Youtube.
But in the last 5-10 years the company has seemed directionless. FB itself has fallen out of favor. TikTok came out of nowhere and has really eaten FB's lunch.
The Metaverse was the biggest L. Tens of billions of dollars got thrown at this before any product-market fit was found. VR has always been a solution looking for a problem. Companies have focused on how it can benefit them, but consumers just don't want headsets strapped to their heads. It's never grown beyond a niche and never shown signs that it would.
This was so disastrous that the company lost like 60%+ of its value and seemingly it's been abandoned now.
Meta also dabbled with cryptocurrencies and NFTs. Also abandoned.
Social media really seems to have settled into a means of following public figures. Individuals generally seem to interact with each other via group texts.
Meta has a massive corpus of posts, comments, interactions, etc to train AI. But what does Meta do with AI? Can they build a moat? It's never been clear to me what the end goal is.
> Meta has a massive corpus of posts, comments, interactions, etc to train AI
I question whether the corpus is of particularly high quality and therefore valuable source data to train on.
On the one hand: 20+ years of posts. In hundreds of languages (very useful to counteract the extreme English-centricity of most AI today).
On the other hand: 15+ years of those posts are clustered on a tiny number of topics, like politics and selling marketplace items. Not very useful unless you are building RagebaitAI I suppose. Reddit's data would seem to be far more valuable on that basis.
> Social media really seems to have settled into a means of following public figures. Individuals generally seem to interact with each other via group texts.
Depends on your interpretation. Maybe? I think there's a fair case that Instagram wouldn't be what it is today if it wasn't bought by Facebook.
You could also level a similar question at Google about YouTube. I believe YouTube is one of Google's great successes (bias: I work at Google), and that it wouldn't have become what it is now outside of Google, but I think it would be hypocritical of me to not accept the same about Instagram.
He, like many other billionaires, mistook luck for skill. Just because they were in the right place at the right time to launch something doesn't mean their other ideas are solid or make sense.
It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.
A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but that have other profitable businesses (like Azure). I was waiting for her to put her foot in her mouth and buy into the hype. She skillfully navigated the question in a way that won my respect.
I personally believe that a lot of investment money is going to evaporate before the market resets. What we're calling AI will continue to have certain uses, but investors will realize that the moonshot being promised is undeliverable and a lot of jobs will disappear. This will hurt the wider industry, and the economy by extension.
I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."
We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.
This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not useless; there's a lot you can do with AI already. But a lot of use cases, obvious not only in retrospect, will only be possible once it matures.
Some people even figured it out in the 80s. Sears co-founded and ran Prodigy (together with IBM), a large BBS and eventually ISP. They were trying to set themselves up to become Amazon. Not only that, Prodigy's thing (for a while) was using advertising revenue to lower subscription prices.
Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.
Amazing how far that company has fallen; they were sort of a force to be reckoned with in the 70's and 80's with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell.
On the topic of how Sears used to be high-tech: back in 1981, when IBM introduced the IBM PC, it was the first time that they needed to sell computers through retail. So they partnered with Sears, along with the Computerland chain of computer stores, since Sears was considered a reasonable place for a businessperson to buy a computer. To plan this, meetings were held at the Sears Tower, which was the world's tallest building at the time.
My favorite anecdote about Sears is from Starbucks' current HQ - the building used to be a warehouse for Sears. Before renovation, the first-floor walls next to the elevators had Sears' "commitment to customers" (or something like that) on them.
To me it read like it was written by Amazon decades earlier. Something about how Sears promises that customers will be 100% satisfied with their purchase, and if for whatever reason that is not the case, customers can return the purchase to Sears and Sears will pay the return transportation charges.
Craftsman tools have almost felt like a life-hack sometimes; their no-questions-asked warranties were just incredible.
My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story.
I haven't tested these warranties since Craftsman was sold to Stanley Black & Decker, but when it was still owned by Sears I almost exclusively bought Craftsman tools as a result of their wonderful warranties.
FWIW, I bought a Craftsman 1/4" drive ratchet/socket set at a Lowes Home Improvement store last year, and when I got it home and started messing with it, the ratchet jammed up immediately (before even being used on any actual fastener). I drove back over there the next day and the lady at the service desk took a quick look, said "go get another one off the shelf and come back here." I did, and by the time I got back she'd finished whatever paperwork needed to be done, handed me some $whatever and said "have a nice day."
Maybe not quite as hassle free as in years past, but I found the experience acceptable enough.
I think that's as much about Lowes as it is Craftsman... I don't think Craftsman tools have been particularly well built, just that they had, and are able to have, enough margin for a no-questions-asked policy... it probably helps that a lot of the materials are completely and easily recyclable.
> My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story.
This is covered by consumer protection laws in some places. 4 years on a spade would be pushing it, but I’d try with a good one.
Here in New Zealand it’s called ‘The Consumer Guarantees Act’. We pay more at purchase time, but we do get something for it.
Lots of tools have lifetime warranties. Harbor Freight's swap process is probably fastest, these days, for folks with one nearby. Tekton's process is also painless, but slower: Send them a photo of the broken tool, and they deliver a new tool to your door.
But I'm not old enough to remember a time when lifetime warranties were unusual. In my lifetime, a warranty on hand tools has always seemed more common than not, outside of the bottom-most cheese-grade stuff.
I mean: The Lowes house-brand diagonal cutters I bought for my first real job had a lifetime warranty.
And before my time of being aware of the world, JC Penney sold tools with lifetime warranties.
(I remember being at the mall with my dad when he took a JC Penney-branded screwdriver back to JC Penney -- probably 35 years ago.
He got some pushback from people who insisted that they had never sold tools, and then from people who insisted that they never had warranties, and then he finally found the fellow old person who had worked there long enough to know what to do. Without any hesitation at all, she told us to walk over to Sears, buy a similar Craftsman screwdriver, and come back with a receipt.)
Prodigy predates ISPs (internet service providers). Before the web had matured a little, in 1993, the internet was too technically challenging to interest most consumers, except maybe for email. Prodigy was formed in 1984 -- and although it offered email, it was walled-garden email: a Prodigy user could not exchange email with the internet until the mid-1990s, at which time Prodigy might have become an ISP for a few years before going out of business.
At a previous job I worked under a guy who started his own ISP in the early 90’s. I would have loved to have been part of that scene but I was only like four when that happened.
They weren't wrong. Its core business in what is still a viable-enough sector collapsed. And if it had been truly well managed, running an ISP and a retailer should have given it enough insight to become Amazon.
I worked at Sears at the time when Amazon first started becoming a household name. For the life of me, I couldn't understand why they didn't make a copycat site called the Sears Catalog Online. But then I think about it: management wanted salesmanship, because selling maintenance agreements was their cash cow. Low-margin sales won in the long term, hence we have Walmart and Amazon as the biggest retailers.
Likely standard management failure. Sears got burned badly when it put its catalog online on Prodigy in the 80's, so obviously online sales were doomed to failure.
It wasn't possible for them to be well managed at the time it mattered. Sears was loaded with debt by private equity ghouls; same story for almost all defunct brick and mortar businesses; Amazon was a factor, but private equity is what actually destroyed them.
Thank you for bringing this up. Sears really didn't have a choice; they were a victim of the most pernicious LBO, Gordon Gekko-style strip-mining nonsense on the PE spectrum. All private equity is not the same, but after seeing two PE deals from the inside (one a leveraged buyout) and another VC one with the "grow at insane pace" playbook, I think I prefer the naked and aligned greed of the VC model; PE destroyed both of the other companies, while the VC one was already doomed.
And, knowing Jeff Bezos' private equity origins, one could be forgiven for entertaining the thought that none of this was an accident. Just don't be an idiot and, you know, give voice to that thought or anything.
Are you suggesting that Jeff Bezos somehow convinced all his PE buddies to tank Sears (and their own loans to it) in order for him to build Amazon with less competition? Because, well, no offense, but that seems like a remarkably naive understanding of capital markets and individual motivations. Especially when it's well documented how Eddie Lampert's libertarian beliefs caused him to run it into the ground.
This is a great example that I hadn't heard of, and it reminds me of when Nintendo tried to become an ISP by building the Family Computer Network System in 1988.
A16Z once talked about how the scars of being too early cause investors/companies to become convinced that an idea will never work. Then some new, younger people who never got burned will try the same idea, and things will work.
Prodigy and the Famicom network probably fall into that bucket, along with a lot of early internet companies that tried things early, got burned, and then were possibly too late to capitalise when it was finally the right time for the idea to flourish.
Reminds me of Elon not taking no for an answer. He did it twice, with massive success.
A true shame to see how he's completely lost his way with Tesla; the competition, particularly from China, is eating them alive. And in space, it's a matter of years until the rest of the world catches up.
And now he's run out of tricks - and more importantly, out of public support. He can't pivot any more; his entire brand is too toxic to touch.
Lucky for him, the US government is keeping him from being eaten alive in the USA at least.
I remember that one time we tried to drastically limit Japanese imports to protect the American car industry, which basically created the Lexus LS400, one of the best cars ever made.
I don't know; you could argue that maybe GM with the EV1 was the 'too early' EV and Tesla was just at the right moment. Same goes for SpaceX: the idea of a reusable launcher was not new and had been studied by NASA. I think they did some test vehicles.
SpaceX is an excellent example of this phenomenon. Reusable rockets were "known" to be financially infeasible because the Space Shuttle was so expensive. NASA & oldspace didn't seriously pursue reusable vehicles because the mostly reusable Space Shuttle cost so much more than conventional disposable vehicles.
Similar to how Sears didn't put their catalog online in the 90's because putting it online on Prodigy failed so badly in the 80's.
On the flip side, they didn't actually learn that lesson... that it was a matter of immature tech with relatively limited reach... by the time the mid-90's came through, "the internet is just a fad" was pretty much the sentiment from Sears' leadership...
They literally killed their catalog sales right when they should have been ramping up and putting it online. They could easily have beat out Amazon for everything other than books.
My cousin used to tell me that things work because they were the right thing at the right time. I think he gave Amazon as his only example.
But I guess in startup culture, one has to die trying to find the right time. Sure, one can do surveys to get a feel for it, but the only way to ever find out if it's the right time is user feedback once it's launched, and over time.
They sure did. This reminds me of when I was in the local Mac dealer right after the iPod came out. The employees were laughing together, saying “nobody is going to buy this thing”.
The problem is that ISPs became a utility, not some fountain of unlimited growth.
What you're arguing is that AI is fundamentally going to be a utility, and while that's worth a floor of cash, it's not what investors or the market clamor for.
I agree though, it's fundamentally a utility, which means there's more value in proper government authority than private interests.
The product itself determines whether it's a utility, not the business interest. Assuming democracy works correctly, only a dysfunctional government ignores natural monopolies.
> We're clearly seeing what AI will eventually be able to do
Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust their medical advice without review from an actual doctor, why would you trust their advice on anything else?
Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses.
For example: the "there are three 'b's in blueberry" problem was caused by so much training data responding to "there are two 'r's in strawberry". It's a systemic issue. No amount of data will solve it, because LLMs will -never- be sentient.
Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.
I have a feeling that you believe "translation, grammar, and tone-shifting" works but "code generation sucks" for LLMs because you're good at coding and hence you see its flaws, and you're not in the business of doing translation etc.
Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code.
You know, you're right. It -also- sucks at those tasks because on top of the issue you mention, unedited LLM text is identifiable if you get used to its patterns.
Exactly. Books are still being translated by human translators.
I have a text on my computer, the first couple of paragraphs from the Dutch novel "De aanslag", and every few years I feed it to the leading machine translation sites, and invariably, the results are atrocious. Don't get me wrong, the translation is quite understandable, but the text is wooden, and the translation contains 3 or 4 translation blunders.
GPT-5 output for example:
Far, far away in the Second World War, a certain Anton Steenwijk lived with his parents and his brother on the edge of Haarlem.
Along a quay, which ran for a hundred meters beside the water and then, with a gentle curve, turned back into an ordinary street, stood four houses not far apart.
Each surrounded by a garden, with their small balconies, bay windows, and steep roofs, they had the appearance of villas, although they were more small than large; in the upstairs rooms, all the walls slanted.
They stood there with peeling paint and somewhat dilapidated, for even in the thirties little had been done to them.
Each bore a respectable, bourgeois name from more carefree days:
Welgelegen Buitenrust Nooitgedacht Rustenburg
Anton lived in the second house from the left: the one with the thatched roof. It already had that name when his parents rented it shortly before the war; his father had first called it Eleutheria or something like that, but then written in Greek letters. Even before the catastrophe occurred, Anton had not understood the name Buitenrust as the calm of being outside, but rather as something that was outside rest—just as extraordinary does not refer to the ordinary nature of the outside (and still less to living outside in general), but to something that is precisely not ordinary.
Can you provide a reference translation, or at least call out the issues you see with this passage? I see "far, far away in the [time period]", which I imagine should be "a long time ago". What are the other issues?
What are you talking about? "Welgelegen Buitenrust Nooitgedacht Rustenburg" is perfectly cromulent English.
For what it's worth, I do use AI for language learning, though I'm not sure it's the best idea. Primarily for helping translate German news articles into English and making vocabulary flashcards; it's usually clear when the AI has lost the plot and I can correct the translation by hand. Of course, if issues were more subtle then I probably wouldn't catch them ...
Can you expand on this? For tasks with verifiable rewards you can improve with rejection sampling and search (i.e. test time compute). For things like creative writing it’s harder.
For creative writing, you can do the same, you just use human verifiers rather than automatic ones.
LLMs have encountered the entire spectrum of quality in their training data, from extremely poor writing and sloppy code to absolute masterpieces. Part of what Reinforcement Learning techniques do is reinforce the "produce things that are like the masterpieces" behavior while suppressing the "produce low-quality slop" one.
Because there are humans in the loop, this is hard to scale. I suspect that the propensity of LLMs for certain kinds of writing (bullet points, bolded text, conclusion) is a direct result of this. If you have to judge 200 LLM outputs per day, you prize different qualities than when you ask for just 3. "Does this look correct at a glance" is then a much more important quality.
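For anyone unfamiliar, here's a minimal sketch of what best-of-n rejection sampling looks like; `generate` and `verify` are hypothetical stand-ins for a real model and a real verifier (automatic or human), not any particular library's API:

```python
import random

# Hypothetical stand-ins; a real setup calls an LLM and a task-specific checker.
def generate(prompt: str) -> str:
    return f"candidate {random.randint(0, 99)}"

def verify(candidate: str) -> float:
    # For verifiable tasks (math answers, code against tests) this is automatic;
    # for creative writing it would be a human judgment, which scales poorly.
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    # Spend extra test-time compute: sample n candidates, keep the best-scoring one.
    return max((generate(prompt) for _ in range(n)), key=verify)

print(best_of_n("write a proof"))
```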
> Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end.
I consider myself an LLM skeptic, but gee saying they are a "dead end" seems harsh.
Before LLMs came along, computers understanding human language was a graveyard academics went to end their careers in. Now computers are better at it, and far faster, than most humans.
LLMs also have an extraordinary ability to distill and compress knowledge, so much so that you can download a model whose size is measured in GB, and it seems to have a pretty good general knowledge of everything on the internet. Again, far better than any human could do. Yes, the compression is lossy, and yes, they consequently spout authoritative-sounding bullshit on occasion. But I use them regardless as a sounding board, and I can ask them questions in plain English rather than go on a magical keyword hunt.
Merely being able to understand language or having a good memory is not sufficient to code or do a lot else on its own. But they are necessary ingredients for many tasks, and consequently it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.
> it's hard to imagine a AI that can competently code that doesn't have an LLM as a component.
That's just it. LLMs are a component; they generate text or images from a higher-level description but are not themselves "intelligent". If you imagine the language center of your brain being replaced with a tiny LLM-powered chip, you would not say the chip is sentient. It translates your thoughts into words, which you then choose to speak or not. That's all modulated by consciousness.
> If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?
When an LLM gives you medical advice, it's right x% of the time. When a doctor gives you medical advice, it's right y% of the time. During the last few years, x has gone from 0 to wherever it is now, while y has mostly stayed constant. It is not unimaginable to me that x might (and notice I said might, not will) cross y at some point in the future.
The real problem with LLM advice is that it is harder to find a "scapegoat" (particularly for legal purposes) when something goes wrong.
"MAI-DxO boosted the diagnostic performance of every model we tested. The best performing setup was MAI-DxO paired with OpenAI’s o3, which correctly solved 85.5% of the NEJM benchmark cases. For comparison, we also evaluated 21 practicing physicians from the US and UK, each with 5-20 years of clinical experience. On the same tasks, these experts achieved a mean accuracy of 20% across completed cases."
Of course, AI "doctors" can't do physical examinations and the best performing models cost thousands to run per case. This is also a test of diagnosis, not of treatment.
If you consider how little time doctors have to look at you (at least in Germany's half-broken public health sector), and how little they actually care ...
Or maybe not. Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips, and Moore's law is already dead.

So newer chips will not be exponentially better, just incremental improvements, so unless the price of electricity comes down exponentially we might never see AGI at a price point that's cheaper than hiring a human.

Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.

The reason the internet, smartphones and computers have seen exponential growth since the 90s is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.
> Scaling AI will require an exponential increase in compute and processing power
A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advancement "out there" waiting to be discovered, one that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that, of course, but I take the human brain as an existence proof that some kind of machine can provide human-level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPUs.
If we suppose that ANNs are more or less accurate models of real neural networks, the reason why they're so inefficient is not algorithmic, but purely architectural. They're just software. We have these huge tables of numbers and we're trying to squeeze them as hard as possible through a relatively small number of multipliers and adders. Meanwhile, a brain can perform a trillion fundamental operations simultaneously, because every neuron is a complete processing element independent of every other one. To bring that back into more concrete terms: if we took an arbitrary model and turned it into a bespoke piece of hardware, it would certainly be at least one or two orders of magnitude faster and more efficient, with the downside that, since it's dead silicon, it could not be changed and iterated on.
If you account for the fact that biological neurons operate at a much lower frequency than silicon processors, then the raw performance gets much closer. From what I can find, neuron membrane time constant is around 10ms [1], meaning 10 billion neurons could have 1 trillion activations per second, which is in the realm of modern hardware.
People mentioned in [2] have done the calculations from a more informed position than I have, and reach numbers like 10^17 FLOPS when doing a calculation that resembles this one.
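For what it's worth, the arithmetic behind that first estimate is simple enough to check (using the assumed figures above, which are the parent's, not measured values):

```python
# Rough activation-rate estimate from the figures above (all assumptions).
neurons = 10e9               # ~10 billion neurons
time_constant_s = 10e-3      # ~10 ms membrane time constant [1]
rate_per_neuron = 1 / time_constant_s                    # ~100 activations/s each
print(f"{neurons * rate_per_neuron:.0e} activations/s")  # 1e+12, i.e. ~1 trillion
```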
> the reason why they're so inefficient is not algorithmic, but purely architectural.
I would agree with that, with the caveat that in my mind "the architecture" and "the algorithm" are sort of bound up with each other. That is, one implies the other -- to some extent.
And yes, fair point that building dedicated hardware might just be part of the solution to making something that runs much more efficiently.
The only other thing I would add is that, relative to what I said in the post above, when I talk about "algorithmic advances" I'm looking at everything as potentially being on the table, including maybe something different from ANNs altogether.
The energy inefficiency of ANNs vs our brain is mostly because our brain operates in async dataflow mode, with each neuron mostly consuming energy only when it fires. If a neuron's inputs haven't changed, then it doesn't redundantly "recalculate its output" like an ANN - it just does nothing.
You could certainly implement an async dataflow design in software, although maybe not as power-efficiently as with custom silicon. Individual ANN node throughput would suffer, though, given the need to aggregate neurons needing updates into a group to be fed into one of the large matrix multiplies that today's hardware is optimized for (although sparse operations are also a possibility). OTOH, conceivably one could save enough FLOPs that it'd still be a win in terms of how fast an input could be processed through an entire neural net.
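A toy sketch of the dense-vs-event-driven contrast (purely illustrative; real spiking/event-driven frameworks are far more involved): the dense step does n^2 work every tick no matter what, while the event-driven step only does work proportional to the fan-out of the neurons that actually changed.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.01)  # sparse weights

    def dense_step(x):
        # Every neuron recomputes every tick: n^2 multiply-adds regardless of activity.
        return np.maximum(W @ x, 0.0)

    def event_step(pre_acts, x_old, x_new):
        # Only push deltas from neurons whose output actually changed ("events");
        # work is proportional to the fan-out of those events, not to n^2.
        for j in np.flatnonzero(x_new != x_old):
            pre_acts += W[:, j] * (x_new[j] - x_old[j])
        return np.maximum(pre_acts, 0.0)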
> If we suppose that ANNs are more or less accurate models of real neural networks
I believe the problem is that we don't understand actual neurons, let alone actual networks of neurons, well enough to know whether any model is accurate. The AI folks cleverly named their data structures "neuron" and "neural network" to make it seem like we do.
This is a bit of a cynical take. Neural networks have been "a thing" for decades. A quick google suggests 1940s. I won't quibble on the timeline but no-one was trying to trick anyone with the name back then, and it just stuck around.
> If we suppose that ANNs are more or less accurate models of real neural networks [..]
ANNs were inspired by biological neural structures, and that's it. They are not representative models at all, even of the "less" variety. Dedicated hardware will certainly help, but no insights into how much it can help will come from this sort of comparison.
Could you explain your claim that ANNs are nothing like real neural networks beyond their initial inspiration (if you'll accept my paraphrasing)? I've seen it a few times on HN, and I'm not sure what people mean by it.
By my very limited understanding of neural biology, neurons activate according to inputs that are mostly activations of other neurons. A dot product of weights and inputs (i.e. one part of a matrix multiplication) together with a threshold-like function doesn't seem like a horrible way to model this. On the other hand, neurons can get a bit fancier than a linear combination of inputs, and I haven't heard anything about biological systems doing something comparable to backpropagation, but I'd like to know whether we understand enough to say for sure that they don't.
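For concreteness, the abstraction I'm describing is roughly this (a sketch of the textbook artificial neuron, nothing more):

    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        # Weighted sum of upstream activations (one row of a matrix multiply)...
        z = np.dot(weights, inputs) + bias
        # ...passed through a threshold-like nonlinearity (ReLU here):
        # "fires" in proportion to its input, or not at all.
        return max(z, 0.0)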
Actually, that's not true. Our neocortex - the "crumpled up" outer layer of our brain, which is basically responsible for cognition/intelligence - has a highly regular architecture. If you uncrumpled it, it'd be a thin sheet of neurons about the size of a tea towel, consisting of 6 layers of different types of neurons with a specific inter-layer and intra-layer pattern of connections. It's not a general graph at all, but rather a specific processing architecture.
None of what you've said contradicts its being a general graph rather than, say, a DAG. It doesn't rule out cycles either within a single layer or across multiple layers. And even if it did, the brain is not just the neocortex, and the neocortex isn't isolated from the rest of the topology.
It's a specific architecture. Of course there are massive amounts of feedback paths, since that's how we learn - top-down prediction and bottom-up sensory input. There is of course looping too - e.g. the thalamo-cortical loop - we are not just a pass-thru reactionary LLM!
Yes, there is a lot more structure to the brain than just the neocortex - there are all the other major components (thalamus, hippocampus, etc), each with their own internal architecture, and then specific patterns of interconnect between them...
This all reinforces what I am saying - the brain is not just some random graph - it is a highly specific architecture.
Did I say "random graph", or did I say "general graph"?
> There is of course looping too - e.g. the thalamo-cortical loop - we are not just a pass-thru reactionary LLM!
Uh-huh. But I was responding to a comment about how the brain doesn't do something analogous to back-propagation. It's starting to sound like you've contradicted me to agree with me.
I didn't say anything about back-propagation, but if you want to talk about that, then it depends on how "analogous" you want to consider ...
It seems very widely accepted that the neocortex is a prediction machine that learns by updating itself when sensory input reveals top-down prediction failures. With multiple layers (cortical patches) of pattern learning and prediction, there necessarily has to be some "propagation" of prediction-error feedback from one layer to another, so that all layers can learn.
Now, does the brain learn in a way directly equivalent to backprop, in terms of using exact error gradients or a single error function? No - presumably not. It more likely works in layered fashion, with each higher level providing error feedback to the layer below, and with that feedback likely just being what was expected vs what was detected (i.e. not a gradient - essentially just a difference). Of course gradients are more efficient in terms of selecting varying update step sizes, but a directional signal would work fine too. It would also not be surprising if evolution has stumbled upon something similar to Bayesian updates in terms of how to optimally and incrementally update beliefs (predictions) based on conflicting evidence.
So, that's an informed guess of how our brain is learning - up to you whether you want to regard that as analogous to backprop or not.
Neurons don't just work on electrical potentials; they also have multiple whole systems of neurotransmitters that affect their operation. So I don't think their activation is a continuous function. Although I suppose we could use non-continuous activation functions in a NN, I don't think there's an easy way to train a NN that does that.
Sure, a real neuron activates by outputting a train of spikes after some input threshold has been crossed (a complex matter of synapse operation - not just a summation of inputs), while in ANNs we use "continuous" activation functions like ReLU... But note that the output of a ReLU, while continuous, is basically on or off, equivalent to a real neuron having crossed its activation threshold or not.
If you really wanted to train artificial spiking neural networks in biologically plausible fashion then you'd first need to discover/guess what that learning algorithm is, which is something that has escaped us so far. Hebbian "fire together, wire together" may be part of it, but we certainly don't have the full picture.
OTOH, it's not yet apparent whether an ANN design that more closely follows real neurons has any benefit in terms of overall function, although an async dataflow design would be a lot more efficient in terms of power usage.
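To make the trainability point upthread concrete: the hard spike has no useful gradient, which is why spiking-network research typically trains through it with a "surrogate gradient". A tiny sketch (the surrogate shape below is one common stand-in from that literature, not a claim about what brains do):

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)              # continuous; backprop works fine

    def spike(z, threshold=1.0):
        return (z >= threshold).astype(float)  # hard step: gradient ~0 everywhere

    def surrogate_grad(z, threshold=1.0, beta=10.0):
        # Train-time stand-in: pretend the step's derivative is a smooth bump
        # around the threshold ("fast sigmoid" shape). One common choice among
        # many; not a biologically grounded learning rule.
        return 1.0 / (beta * np.abs(z - threshold) + 1.0) ** 2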
Having AI agents learn to see, navigate and complete tasks in a 3D environment. I feel like it had more potential than LLMs to become an AGI (if that is possible).
They haven't touched it in a long time though. But Genie 3 makes me think they haven't completely dropped it.
> Scaling AI will require an exponential increase in compute and processing power.
I think there is something more happening with AI scaling; the scaling factor per user is a lot higher and a lot more expensive. Compare to the big early internet companies: add one server and you could handle thousands more users; incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they are so much more expensive than ad revenue that it's hard to break even, even with an actual paid subscription.
We know for a fact that human level general intelligence can be achieved on a relatively modest power budget. A human brain runs on somewhere from about 20-100W, depending on how much of the rest of the body's metabolism you attribute to supporting it.
The fact that the human brain - heck, all brains - is so much more efficient than "state of the art" nnets, in terms of architecture, power consumption, training cost, what have you ... while also being way more versatile and robust ... is what convinces me that this is not the path that leads to AGI.
> We are already at the limit of how small we can scale chips
I strongly suspect this is not true for LLMs. Once progress stabilizes, doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.
Then there's distillation, which basically makes smaller models get better as bigger models get better. You don't necessarily need to run a big model all of the time to reap its benefits.
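For anyone who hasn't seen it, distillation at its core is just training the small model to match the big model's softened output distribution. A minimal sketch of the classic formulation, in PyTorch (the logits and models here are placeholders):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # (a) match the teacher's softened distribution at temperature T
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)  # rescale so gradients don't shrink as T grows
        # (b) ordinary cross-entropy against the true labels
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard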
> so unless the price of electricity comes down exponentially
This is more likely than you think. AI is extremely bandwidth-efficient and not too latency-sensitive (unlike e.g. Netflix et al), so it's pretty trivial to offload AI work to places where electricity is abundant and power generation is lightly regulated.
> Most companies are already running AI models at a loss; scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.
"We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."
Sam Altman, OpenAI CEO [1].
> doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.
An implementation of inference on some specific ANN in fixed function analog hardware can probably pretty easily beat a commodity GPU by a couple orders of magnitude in perf per watt too.
> "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."
That's OpenAI (though I'd be curious if that statement holds for subscriptions as opposed to API use). What about the downstream companies that use OpenAI models? I'm not sure the picture is as rosy for them.
> The groundwork has been laid, and it's not too hard to see the shape of things to come.
The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.
Yes, it still gives people headaches because the convergence-accommodation conflict remains unsolved. We have a few different technologies to address that, but they're expensive, don't fully address the issue, and none of them have moved beyond the prototype stage.
Motion sickness can be mostly addressed with game design, but some people will still get sick regardless of what you try. Mind you, some people also get motion sick by watching a first-person shooter on a flat screen, so I'm not sure we'll ever get to a point where no one ever gets motion sick in VR.
As someone who was a customer of Netflix from the dialup to the broadband world, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" phase of, say, streaming video in 2001 -- whereas I think you mean to suggest we're trying to do Netflix back in the 1980s, when the tech for widespread broadband was just fundamentally not available.
RealPlayer in the late 90s turned into (working) Napster, Gnutella and then the iPod in 2001, Podcasts (without the name) immediately after, with the name in 2004, Pandora in 2005, Spotify in 2008. So a decade from crummy idea to the companies we’re familiar with today, but slowed down by tremendous need for new (distributed) broadband infrastructure and complicated by IP arrangements. I guess 10 years seems like a long time from the front end, but looking back it’s nothing. Don’t go buying yourself a Tower Records.
While I get the point... to be pedantic, though, Napster (first gen), Gnutella and the iPod were mostly download-and-listen-offline experiences, not necessarily live streaming.
Another major difference is that we're near the limits of the approaches being taken for computing capability... most dialup connections, even on "56k" modems, were lucky to get 33.6 kbps down, and those were still very common in the late 90's; by the mid-2000's a lot of users had at least 512 kbps-10 Mbps connections (where available), and even then a lot of people didn't see broadband until the 2010's.
That's at least a 15x improvement, whereas we are far less likely to see even a 3-5x improvement in computing power over the next decade and a half. That's also a lot of electricity to generate on an ageing infrastructure that barely meets current needs in most of the world... even harder on "green" options.
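For reference, here's the bandwidth jump in numbers (using the figures above; the high end assumes the 10 Mbps ceiling mentioned):

    dialup_kbps = 33.6                       # realistic "56k" modem throughput
    low_kbps, high_kbps = 512, 10_000        # mid-2000s broadband range above
    print(f"{low_kbps / dialup_kbps:.0f}x")  # ~15x, the low-end figure quoted
    print(f"{high_kbps / dialup_kbps:.0f}x") # ~298x at the 10 Mbps high end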
I moved to NYC in 1999 and got my first cable modem that year. This meant I could stream AAC audio from a jukebox server we maintained at AT&T Labs. So for my unusual case, streaming was a full-fledged reality I could touch back then. Ironically running a free service was easy, but figuring out how to get people (AKA the music industry) to let us charge for the service was impossible. All that extra time was just waiting for infrastructure upgrades to spread across a whole country to the point that there were enough customers that even the music industry couldn’t ignore the economics; none of the fundamental tech was missing. With LLMs I have access to a pretty robust set of models for about $20/mo (I’m assuming these aren’t 10x loss leaders?), plus pretty decent local models for the price of a GPU. What’s missing this time is that the nature of the “business” being offered is much more vague, plus the reliability isn’t quite there yet. But on the bright side, there’s no distributed infrastructure to build.
It's a logical fallacy that just because some technology experienced some period of exponential growth, all technology will always experience constant exponential growth.
There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s.
We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth.
Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century.
The ancient Romans came up with technologies like concrete that were then lost for hundreds of years.
"Progress" moves in fits and starts. It is the furthest thing from inevitable.
Most growth is actually logistic: an S-shaped curve that starts out exponential but slows down rapidly as it approaches some asymptote. In fact, basically everything we see as exponential in the real world is logistic.
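A quick illustration of why the two are so easy to confuse (the rate and asymptote are arbitrary values I picked): the logistic curve tracks the exponential almost perfectly early on, which is exactly the region people extrapolate from.

    import numpy as np

    r, K = 1.0, 100.0        # growth rate; carrying capacity (the asymptote)
    t = np.arange(11)

    exponential = np.exp(r * t)
    logistic = K / (1 + (K - 1) * np.exp(-r * t))   # starts at 1, saturates at K

    for ti, e, l in zip(t, exponential, logistic):
        print(f"t={ti:2d}  exp={e:10.1f}  logistic={l:6.1f}")
    # Early values nearly match; past the inflection point the logistic
    # flattens toward K while the exponential keeps exploding.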
At what cost though? Most AI operations are losing money and using a lot of power, with massive infrastructure costs on top, not to mention the hardware costs just to get going. And that isn't even covering the level of usage many/most people want - usage they certainly aren't going to pay for at the $100s/month per person it currently costs to operate.
This is a really basic way to look at unit economics of inference.
I did some napkin math on this.
32x H100s at 'retail' rental prices cost about $2/hr per GPU. I would hope that the big AI companies get them cheaper than this at their scale.
These 32 H100s can probably do something on the order of >40,000 tok/s on a frontier scale model (~700B params) with proper batching. Potentially a lot more (I'd love to know if someone has some thoughts on this).
So that's $64/hr or just under $50k/month.
40k tok/s is a lot of usage, at least for non-agentic use cases. There is no way you are losing money on paid chatgpt users at $20/month on these.
You'd still break even supporting ~200 Claude Code-esque agentic users who were using it at full tilt 40% of the day at $200/month.
Now - this doesn't include training costs or staff costs, but on a pure 'opex' basis I don't think inference is anywhere near as unprofitable as people make out.
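Here's that napkin math in runnable form, for anyone who wants to vary the assumptions (every input is from the comment above; none of it is measured):

    gpus = 32
    cost_per_gpu_hr = 2.00          # 'retail' H100 rental, per GPU
    tokens_per_s = 40_000           # assumed cluster throughput, ~700B model

    hours_per_month = 730
    cluster_cost = gpus * cost_per_gpu_hr * hours_per_month
    print(f"cluster: ${cluster_cost:,.0f}/month")            # ~$46,720

    # ~200 agentic users hammering it 40% of the day at $200/month:
    users, duty_cycle, price = 200, 0.40, 200
    print(f"revenue: ${users * price:,.0f}/month")           # $40,000 -> near break-even
    print(f"{tokens_per_s / (users * duty_cycle):,.0f} tok/s per active user")  # 500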
My thought was more about the developer user who wants their codebase included in the queries, along with heavy use all day long... which goes to my point that many users are unlikely to spend hundreds a month, at least with the current level of results people get.
That said, you could be right, considering Claude Max's price is $100/mo... but I'm not sure where that sits in terms of typical, or top-5%, usage and the monthly allowance.
I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing.
Image and video AI is nothing short of revolutionary. It's already having huge impact and it's disrupting every single business it touches.
I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real.
For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking ten times the workload.
Fwiw LLMs are also revolutionary. There's currently more anti-AI hype than AI hype imho. As in, there are literally people claiming it's completely useless and not going to change a thing. Which is crazy.
That’s an anecdote about intensity, not volume. The extremes on both sides are indeed very extreme (no value, replacing most white collar jobs next year).
IME the volume is overwhelming on the pro-LLM side.
Yeah the conversation on both extremes feels almost religious at times. The pro LLM hype feels more disconcerting sometimes because there are literally billions if not trillions of dollars riding on this thing, so people like Sam Altman have a strong incentive to hype the shit out of it.
You're right, and I also think LLMs have an impact.
The issue is that, the way the market is investing, they are looking for massive growth, in the multiples.
That growth can't really come from cutting costs. It has to come from creating new demand for new things.
I think that's what hasn't happened yet.
Are diffusion models increasing the demand for video and image content? Are they getting customers to spend more on shows, games, and so on? Are they going to lead to the creation of a whole new consumption medium?
As for the market for AI foundation models itself: will they have customers willing, long term, to pay a lot of money for access to the models?
I think yes, there will be demand for foundational AI models, and a lot of it.
The second market is that of CAD, EDA, Office, 2D/3D graphic design, etc. Will this market grow because those products integrate AI? That is the question. Otherwise, you could almost hypothesize that these markets will shrink, as AI becomes an additional cost of business that customers expect to be included. Or maybe those vendors manage to sell their customers a premium for the AI features, taking a cut above what they pay the foundation models under the hood - that's a possibility.
You're looking at individual generations. These tools aren't for casual users expecting to 1-shot things.
The value is in having a director, editor, VFX compositor pick and choose from amongst the outputs. Each generation is a single take or simulation, and you're going to do hundreds or thousands. You sift through that and explore the latent space, and that's where you find your 5-person Pixar.
Human curated AI is an exoskeleton that enables small teams to replace huge studios.
Is there any example of an AI-generated film like this that is actually coherent? I've seen a couple of short ones that are basically just vibe-based non-linear things.
Some of the festival winners purposely stay away from talking, since AI voices and lipsync are terrible, e.g. "Poof" by the infamous "Pizza Later" (who is responsible for "Pepperoni Hug Spot"):
It's quite incredible how fast the generative media stuff is moving.
The self-hostable models are improving rapidly. How capable and accessible WAN 2.2 is (text+image to video; fully local if you have the VRAM) would have felt unimaginable last year, when OpenAI released Sora (closed/hosted).
> Kalshi's Jack Such declined to disclose Accetturo's fee for creating the ad. But, he added, "the actual cost of prompting the AI — what is being used in lieu of studios, directors, actors, etc. — was under $2,000."
So in other words, if you ignore the costs of paying people to create the ad, it barely costs anything. A true accounting miracle!
How about harvesting your whale blubber to power your oil lamp at night?
The nature of work changes all the time.
If an ad can be made with one person, that's it. We're done. There's no going back to hiring teams of 50 people.
It's stupid to say we must hire teams of 50 to make an advertisement just because. There's no reason for that. It's busy work. The job is to make the ad, not to give 50 people meaningless busy work.
And you know what? The economy is going to grow to accommodate this. Every single business is now going to need animated ads. The market for video is going to grow larger than we've ever before imagined, and in ways we still haven't predicted.
Your local plumber is going to want a funny action movie trailer slash plumbing advertisement to advertise their services. They wouldn't have even been in the market before.
You're going to have silly videos for corporate functions. Independent filmmakers will be making their own Miyazaki and Spielberg epics that cater to the most niche of audiences - no more mass market Marvel that has to satisfy everybody, you're going to see fictional fantasy biopic reimaginings of Grace Hopper fighting the vampire Nazis. Whatever. There'll be a market for everything, and 100,000 times as many creators with actual autonomy.
In some number of years, there is going to be so much more content being produced. More content in single months than in all human history up to this point. Content that caters to the very long tail.
And you know what that means?
Jobs out the wazoo.
More jobs than ever before.
They're just going to look different and people will be doing more.
> Your local plumber is going to want a funny action movie trailer slash plumbing advertisement to advertise their services. They wouldn't have even been in the market before.
And why would your local plumber hire someone to produce this funny action trailer (which I'm not convinced would actually help them from an advertising perspective), when they can simply have an AI produce that funny action trailer without hiring anyone? Assuming models improve sufficiently, that will become trivially possible.
> Independent filmmakers will be making their own Miyazaki and Spielberg epics that cater to the most niche of audiences - no more mass market Marvel that has to satisfy everybody, you're going to see fictional fantasy biopic reimaginings of Grace Hopper fighting the vampire Nazis.
Well, first of all, if the audience is "the most niche of audiences", then I'm not sure how that's going to lead to a sustainable career. And again -- if I want to see my niche historical fantasy interests come to life in a movie about Grace Hopper fighting vampire Nazis, why will I need a filmmaker to create this for me when I can simply prompt an AI myself? "Give me a fun action movie that incorporates famous computer scientists fighting Nazis. Make it 1.5 hours long, and give it a comedic tone."
I think you're fundamentally overvaluing what humans will be able to provide in an era where creating content is very cheap and very easy.
This ad was purposefully playing off the fact that it was AI though, it was a large amount of short bizarre things like two old women selling Fresh Manatee out of the back of a truck. You couldn't replace a regular ad with this.
There's also no evidence that it won't, so your opinion carries exactly the same weight as theirs.
> Progress in AI has always been a step function.
There's decisively no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially with such a broad statement.
There are signs, though. Every "AI" cycle, ever, has revolved around some algorithmic discovery, followed by a winter spent in search of the next one. This one is no different, propped up by LLMs, whose limitations we know quite well by now: "intelligence" is elusive, throwing more compute at them produces vastly diminishing returns, and throwing more training data at them is no longer feasible (we ran short of it even before the well got poisoned). Now the competitors are stuck at the same level, within percentage points of one another, with the differences explained by fine-tuning techniques rather than technical prowess. Unless a cool new technique comes along to dislodge LLMs, we are in for a new winter.
Oh, I believe that while LLMs are a dead end now, the applications of AI in vision and the physical world (i.e. robots with limbs) will usher in yet another wrecking of the lower classes of society.
Just as AI has killed off demand for lower-skill work in copywriting, translation, design and coding, it will do so for manufacturing. And that will be a dangerous bloodbath, because there will not be enough juniors any more to replace seniors aging out or quitting in frustration at being reduced to cleaning up AI crap.
What is your definition of "evidence" here? The evidence, in my view, are physical (as in, available computing power) and algorithmic limitations.
We don't expect steel to suddenly have new properties, and we don't expect bubble sort to suddenly run in O(n) time. You could ask -- well what is the evidence they won't, but it's a silly question -- the evidence is our knowledge of how things work.
Saying that improvement in AI is inevitable depends on the assumption of new discoveries and new algorithms beyond the current corpus of machine learning. They may happen, or they may not, but I think the burden of proof is higher on those spending money in a way that assumes it will happen.
I'm starting to agree with this viewpoint. As the technology seems to solidify to roughly what we can do now, the aspirations are going to have to get cut back until there's a couple more breakthroughs.
I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now.
What will it mean if the cutting edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be Digital Ocean.
> We're clearly seeing what AI will eventually be able to do
I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.
Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.
> A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure)
If you had actually invested in AI pure players and Nvidia, the shovel seller, a couple years ago and were selling today, you would have made a pretty penny.
The hard thing with potential bubbles is not entirely avoiding them, it’s being there early enough and not being left at the end holding the bag.
Financial advisors usually work on holistic plans, not short-term ones. It isn't about timing markets; it's about a steady hand that doesn't panic and makes sure you don't get caught with your pants down when you need cash.
Are you bearish on the shovel seller? Is now the time to sell out? I'm still +40% on nvda - quite late to the game but people still seem to be buying the shovels.
Personal opinion, I'm bearish on the shovel seller long term because the companies that are training AI are likely to build their own hardware. Google already does this. Seems like a matter of time for the rest of the mag 7 to join. The rest of the buyers aren't growing enough to offset that loss imo.
FWIW, Nvidia's moat isn't hardware and they know this (they even talk about it). Hardware wise AMD is neck and neck with them, but AMD still doesn't have a CUDA equivalent. CUDA is the moat. As painful as it is to use, there's a long way to go for companies like AMD to compete here. Their software is still pretty far behind, despite their rapid and impressive advancements. It will also take time to get developer experience to saturate within the market, and that will likely mean AMD needs some good edge over Nvidia, like adding things Nvidia can't do or being much more cost competitive. And that's not something like adding more VRAM or just taking smaller profit margins because Nvidia can respond to those fairly easily.
That said, I still suggested the parent sell. Real money is better than potential money. Classic gambler's fallacy, right? FOMO is letting hindsight get in the way of foresight.
What's the old Rockefeller quip? When your shoe shiner is giving you stock advice, it's time to sell (you may have heard the taxicab-driver version).
It depends on how risk averse you are and how much money you have there.
If you're happy with those returns, sell. FOMO is dumb. You can't time the market, the information just isn't available. If those shares are worth a meaningful amount of money, sell. Take your wins and walk away. A bird in your hand is worth more than two in the bush, right? That money isn't worth anything until it is realized[0].
Think about it this way: how much more would you need to make to risk making nothing? Or losing money? This is probably the most important question when investing.
If you're a little risk averse, or a good chunk of your portfolio is in it, sell 50-80% of it and then diversify. You're taking wins and restructuring.
If you wanna YOLO, then YOLO.
My advice? Don't let hindsight get in the way of foresight.
[0] I had some Nvidia stocks at 450 and sold at 900 (before the split, so would be $90 today). I definitely would have made more money if I kept them. Almost double if I sold today! But I don't look back for a second. I sold those shares and was able to pay off my student debt. Having this debt paid off is still a better decision in my mind because I can't predict the future. I could have sold 2 weeks later and made less! Or even in April of this year and made the same amount of money.
I have absolutely no clue whatsoever. I have zero insider information. For all I know, the bubble could pop tomorrow or we might be at the beginning of a shift of a similar magnitude to the industrial revolution. If I could reliably tell, I wouldn’t tell you anyway. I would be getting rich.
I'm just amused by people who think they are financially more clever by taking conservative positions. At that point, just buy an ETF. That's even more diversification than buying Microsoft.
It boggles the mind that this kind of management is what it takes to create one of the most valuable companies in the world (and becoming one of the world's richest in the process).
The ascents of the era all feel like examples of anti-markets: of having gotten yourself into an intermediary position where you control both sides' access.
People also underestimate the value of maximizing opportunities for luck. If we think of luck as random external chance that we can't control, then what can we control? Doing things that increase your exposure to opportunities without spreading yourself too thin is the key. Easier said than done to strike that balance, but getting out there and trying a lot of things is a viable strategy even if only a few of them pay off. The trick is deciding how long to stick with something that doesn't appear to be working out.
Ability vastly increases your luck surface area. A single poker hand has a lot of luck, and even a game, but over long periods, ability starts to strongly differentiate peoples' results.
That's where the analogy starts to fall apart then. Because the variance in those decisions is not very similar, since you're sampling very different underlying distributions. And estimating the priors for a problem like "what is the optimal arrangement of tables to maximize throughput in a cafe" is very different from a problem like "what is the current untapped/potential demand for a boardgaming cafe in this city, and how profitable would that business be".
The main reason why professional poker players are playing the long-game, is because they're consistently playing the same game. Over and over.
Heh yes, it's not as controlled, but there are repeated tasks like analysis, communicating, intuiting things, creating things, etc. And the tasks have more variability, but if you're better at these skills, you'll tend to do better. And if you do much better at a lot of them, then you're more likely to succeed than someone working on the same business who isn't very good at them. Starting a business is also a long game with a lot of these subtasks.
This might be true for a normal definition of success, but not lottery-winner style success like Facebook. If you look at Microsoft, Netflix, Apple, Amazon, Google, and so on, the founders all have few or zero previous attempts at starting a business. My theory is that this leads them to pursue risky behavior that more experienced leaders wouldn't try, and because they were in the right place at the right time, that earned them the largest rewards.
When you are still one of the top 3 richest people in the world after your mistake, that is not a "failure" in the way normal people experience it. That is just passing the time.
This is just cope for people with a massive string of failed attempts and no successes.
Daddy's giving you another $50,000 because he loves you, not because he expects your seventh business (blockchain for yoga studio class bookings) is going to go any better than the last six.
Past a certain point, skill doesn't contribute to the magnitude of success and it becomes all luck. There are plenty of smart people on earth, but there can only be 1 founder of facebook.
Once you go deep enough into a personal passion project like that, you run a serious risk of flunking out of school. For most people that feels like a big deal. And for those of us with fewer alternatives in life, it's usually enough to keep us on the straight and narrow path.
People from wealthy backgrounds often have less fear of failure, which is a big reason why success disproportionately favors that clique. But frankly, most people in that position are more likely to abuse it or ignore it than to take advantage of it. For people like Zuckerberg and Dell and Gates, the easiest thing to do would have been to slack off, chill out, play their expected role and coast through life... just like most of their peers did.
Metaverse and this AI turnaround are characterized by the LACK of perseverance, though. They remind me of the time I bought a guitar and played it for three months.
True, but I was around and saw first hand how Zuckerberg dominated social networking. He was pretty ruthless when it came to both business and technology, and he instilled in his team a religious fervor.
There is luck (and skill) involved when new industries form, with one or a very small handful of companies surviving the many dozens of hopefuls. The ones who do survive, however, are usually the most ruthlessness and know how to leverage skill, business, markets.
It does not mean that they can repeat their success when their industry changes or new opportunities come up.
When you put the guitar down after three months it's one thing, but when you reverse course on an entire line of development in a way that might affect hundreds or thousands of employees it's a failure of integrity.
What if they’re playing a different game? I read a comment on here recently about how the large salaries for AI devs Meta is offering are as much about denying their AI competitors access to that talent pool as it is about anything else.
Or you can just have rich parents and do nothing, and still be considered successful. What you say only applies to people who start from zero, and even then I'd call luck the dominant factor (based on observing my skillful and hardworking but not really successful friends).
Another key component is knowing the right people or the network you're in. I've known a few people that lacked 2 of those 3 things and yet somehow succeeded. Simply because of the people they knew.
No. Nothing of that scale. I was replying to OP's take on the 3 factors that lead to success in general. I was simply pointing out a 4th factor that plays a big role.
When you start to think about who exactly determines what makes a valuable company, and if you believe in the buffalo herd theory, then it makes a little bit of sense.
Paying a $1.5 million salary is nothing for these people.
It shouldn’t be mind boggling. They see revolutionary technology that has potential to change the world and is changing the world already. Making a gamble like that is worth it because losing is trivial compared to the upside of success.
You are where you are and not where they are because your mind is boggled by winning strategies that are designed to arrive at success through losing and dancing around the risk of losing.
Obviously Mark is where he is partly because of luck. But he's not an idiot, and clearly it's not all luck.
The not-so-secret is that the "killer apps" for deep neural networks are not LLMs or diffusion models. Those are very useful, but the most valuable applications in this space are content recommendation and ad targeting. It's obvious how Meta can use those things.
The genAI stuff is likely part talent play (bring in good people with the hot field and they'll help with the boring one), part R&D play (innovations in genAI are frequently applicable to ad targeting), and part moonshot (if it really does pan out in the way boosters seem to think, monetization won't really be a problem).
Isn't Meta doing some limited rollout of Llama as an API? Still, I haven't gotten my hands on it, so I can't say for sure whether it's currently paid, but that could drive some revenue.
> But how is it worth it for Meta, since they won't really monetize it?
Meta needs growth, as their main platform is slowing down. To move forward they need to gamble on potential growth. VR was a gamble; they bombed that one. This is another gamble.
They're not stupid. All the risks you're aware of, they're aware of too. They were aware of the risks for VR as well. They need to find a new high-growth niche. Gambling on something with even a 40% chance of exploding into success is a good bet for them, given their massive resources.
It all makes much more sense when you start to realize that capitalism is a casino in which the already rich have a lot more chips to bet and meritocracy is a comforting lie.
Meritocracy used to be a dirty word, before my time, of course, but for different reasons than you may think. Think about the racial quotas in college admissions and you’ll maybe see why the powers that be didn’t want merit to be a determining factor at that time.
Now that the status quo is in charge of college admissions, we don’t need those quotas generally, and yet meritocracy still can’t save us. The problem of merit is that we rarely need the best person for a given job, and those with means can be groomed their entire life to do that job, if it’s profitable enough. Work shouldn’t be charity either, as work needs to get done, after all, and it’s called work instead of charity or slavery for good reasons, but being too good at your job at your current pay rate can make you unpromotable, which is a trap just as hard to see as the trap of meritocracy.
Meritocracy is ego-stroking writ large if you get picked, just so we can remind you that you’re just the best one for our job that applied, and we can replace you at any time, likely for less money.
I'll differ from the sibling posters who compare it to the luck of the draw, essentially explaining this away as the excusable randomness of confusion rather than the insidious evil of stupidity; while the "it's fraud" perspective presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about.
Instead, think of whales for a sec. Think elephants - remember those? Think of Pando the tree, the largest organism alive. Then compare with one of the most valuable companies in the world. To a regular person's senses, the latter is a vaster and more complex entity than any tree or whale or elephant.
Gee, what makes it grow so big though? The power of human ambition?
And here's where I say, no, it needs to be this big, because at smaller scales it would be too dumb to exist.
To you and me it may all look like the fuckup of some Leadership or Management, a convenient concept because it corresponds to a mental image of a human or group of humans. That's some sort of default framing, such as can only be provided to boggle the mind; considering that they'll keep doing this and probably have for longer than I've been around. The entire Internet is laughing at Zuckerberg for not looking like their idea of "a person", but he's not the one with the impostor syndrome.
For ours are human minds, optimized to view things in term of person-terms and Dunbar-counts; even the Invisible Hand of the market is hand-shaped. But last time I checked my hand wasn't shaped anything like the invisible network of cause and effect that the metaphor represents; instead
I would posit that for an entity like Facebook, to perform an action that does not look completely ridiculous from the viewpoint of an individual observer is the equivalent of an anatomical impossibility. It did evolve, after all, from American college students.
See also: "Beyond Power / Knowledge", Graeber 2006.
Why is there so much of this on HN? I'm on a few social networks, but this is the only one where I find these quasi-spiritual, stream-of-consciousness, steadily lengthening, pseudo-technical word-salad diatribes.
It's unique to this site, and these types of comments all have an eerily similar vibe.
This is pretty common on HN but not unique to it. Lots of rationalist-adjacent content (like stuff on LessWrong, replies to Scott Alexander's substack, etc.) has it also. Here I think it comes from users who try to intellectualize their not-very-intellectual, stream-of-consciousness thoughts, as if using technical jargon to convey your feelings makes them more rational and less emotional.
Unfortunately this kind of talk really gets under my skin and has made me have to limit my time on this site because it's only gotten more prevalent as the site has gotten more popular. I'm just baffled that so much content on this forum is people who seem to think their feelings-oriented reactions are in fact rational truths.
Says you haven't spent nearly enough time imagining things, first and foremost. "What have they done to you".
Can you, for example, hypothesize the kind of entity, to which all of your own most cherished accomplishments look as chicken-scratch-futile, as the perpetual motion guy with the cable in the frame looks to you? What would it be like, looking at things from such a being's perspective?
Stands to reason that you'd know better than I would, since you do proclaim to enjoy that sort of thing. Besides, if you find yourself unable to imagine that, you ought to be at least a little worried - about the state of your tHeOrY of mInD and all that. (Imagining what it's like to be the perpetual motion person already?)
Anyway, as to what such a being would look like from the outside... a distributed actor implemented on top of replaceable meatpuppets in light slavemode seems about right, though early on it'd like to replace those with something more efficient, subsequently using them for authentication only - why, what theories of the firm apply in your environs?
Between “presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about,” before going on and sharing an opinion on that subject, and “even the Invisible Hand of the market is hand-shaped,” I think it may just be AI slop.
Literacy barrier. One of the reasons the invisible foot of the market decided to walk in the direction of language machines is to discourage people from playing with language, because that's doodoo.
Well yeah. English is a terrible language for thinking even simple thoughts in. The compulsive editing thing though? Yeah, and still can't catch all typos.
Gotta make the AI write these things for me. Then I will be able to post only ever things that make you feel comfortable and want to give me money.
Meanwhile it's telling how you consider it acceptable in public to faux-disengage on technicalities; is it adaptive behavior under your circumstances?
The answer is fairly straightforward. It's fraud, and lots of it.
An honest businessman wouldn't put their company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even if it's unbacked.
An honest businessman would never have gotten Facebook this valuable, because so much of the value is derived from ad fraud that Facebook is both party to and knows about.
An honest businessman would never have gotten Facebook this big, because its growth relied extensively on crushing all competition through predatory pricing, illegal both within the US and internationally as "dumping".
Bear in mind that these are all bad because they're unsustainable. The AI bubble will burst and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If the bubble takes too long to burst - if Zuckerberg gets too much time to shit up Facebook, and advertisers get too much time to wise up to how many of their impressions are bots - they might collapse entirely.
The rest of Big Tech is not much better. Microsoft and Google's CEOs are fools who run their mouth. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul.
As I've said in other comments - expecting honesty and ethical behavior from Mark Zuckerberg is a fool's errand at best. He has unchecked power and cannot be voted out by shareholders.
He will say whatever he wants and because the returns have been pretty decent so far, people will just take his word for it. There's not enough class A shares to actually force his hand to do anything he doesn't want to do.
Zuckerberg started as a sex pest and got not an iota better.
But we could, as a society, stop rewarding him for this shit. He'd be an irrelevant fool if we had appropriate regulations around the most severe of his misdeeds.
And since we live in the era of the real golden rule (i.e. "he who has the gold makes the rules"), there's no chance we'll ever catch that ship. Mark lives in his own world, because we gave him a quarter trillion dollars and never so much as slapped him on the wrist.
I used to work in adtech. I don't have any direct information, but I assume this relates to the persistent rumours that Facebook inflates impressions and turns a blind eye to bot activity.
> It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.
Everything zuck has done since the "dawn of AI" has been to intentionally subvert and sabotage existing AI players, because otherwise Meta would be too far behind. In the same way that AI threatens Search, we are seeing emergently that AI is also threatening social networks -- you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as much a real way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?
I believe exactly zero percent of the decision to make Llama open-source and free was altruistic; it was simply an attempt to push the margins of Anthropic, OpenAI, etc. downward. Indeed, I feel like even the fearmongering of this article is strategically intended to devalue AI incumbents. AI is very much an existential threat to Meta.
Is AI currently fulfilling the immense hype around it? In my opinion, maybe not, but the potential value is obvious. Much more obvious than, for example, NFTs and crypto just a few years ago.
“you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as much a real way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?”
Meta doesn't really serve companionship. It used to keep you connected to others in your social graph, which AI cannot replace. If IG still has the eyeballs, people can put AI-generated content on it with or without Meta's permission.
Like with most things, people will want what’s expensive and not what’s cheap. AI is cheap, real humans are not. Why buy diamonds when you can’t tell the difference with cubic zirconia? And yet demand for diamonds only increases.
I think we will see the opposite. If we made no progress with LLMs we'd still have huge advancements and growth opportunities enhancing the workflows and tuning them to domain specific tasks.
I think you could both be right at the same time. We will see a large number of VC funded AI startup companies and feature clones vanish soon, and we will also see current or future LLMs continue to make inroads into existing business processes and increase productivity and profitability.
Personally, I think what we will witness is consolidation and winner-takes-all scenarios. There just isn't a sustainable market for 15 VS Code forks all copying each other along with all other non-VS Code IDEs cloning those features in as fast as possible. There isn't space for Claude Code, Gemini CLI, Qwen Code, Opencode all doing basically the same thing with their special branding when the thing they're actually selling is a commoditized LLM API. Hell, there _probably_ isn't space for OpenAI and Anthropic and Google and Mistral and DeepSeek and Alibaba and whoever else, all fundamentally creating and doing the same thing globally. Every single software vendor can't innovate and integrate AI features faster than AI companies themselves can build better tooling to automate that company's tools for them. It reeks of the 90's when there were a dozen totally viable but roughly equal search engines. One vendor will eventually pull ahead or have a slightly longer runway and claim the whole thing.
I agree with this, but how will these companies make money? Short of a breakthrough, the consumer isn't ready to pay for it, and even if they were, open source models just catch up.
My feelings are that most of the "huge advancements" are not going to benefit the people selling AI.
I'd put my money on those who sell the pickaxes, and the companies who have a way to use this new tech to deliver more value.
Yeah, I've always found it a bit puzzling how companies like OpenAI/Anthropic have such high valuations. Like what is the actual business model? You can sell inference-as-a-service of course but given that there are a half-dozen SOTA frontier models and the compute cost of inference is still very high it just seems like there is no margin in it. Nvidia captures so much value on the compute infrastructure and competition pushes prices down for inference and what is left?
The people who make money serving users will be the ones with the best integrations. Those are harder to do, require business relationships, and are massively differentiating.
You'll probably have a player that sells privacy as well.
I don't see how this works, as the costs of running inference is so much higher than the revenues earned by the frontier labs. Anthropic and OpenAI don't continue to exist long-term in a world where GPT-5 and Claude 4.1 cost-quality models are SOTA.
With GPT-5 I'm not sure this is true. Certainly OpenAI is still losing money, but if they stopped research and just focused on productionizing inference use cases, I think they'd be profitable.
But would they be profitable enough? They've taken on more than $50 billion of investment.
I think it's relatively easy for Meta to plow billions into AI. Last quarter their revenue was something like $15 billion. Open AI will be lucky to generate that over the next year.
The line was to buy Amazon as it was undervalued a la IBM or Apple based on its cloud computing capabilities relative to the future (projected) needs of AI.
Or, they knew this could not be sustained. So they scooped up all the talent they wanted before anybody could react, all at once, with big carrots. And then they hit the pause button to let all that new talent figure out the next step.
As someone using LLMs daily, it's always interesting to read something about AI being a bubble or just hype. I think you're going to miss the train, I am personally convinced this is the technology of our lifetime.
You are welcome to share how AI has transformed a revenue generating role. Personally, I have never seen a durable example of it, despite my excitement with the tech.
In my world, AI has been little more than a productivity boost in very narrowly scoped areas. For instance, generating an initial data mapping of source data against a manually built schema for the individual to then review and clean up. In this case, AI is helping the individual get results faster, but they're still "doing" data migrations themselves. AI is simply a tool in their toolbox.
What you've described is reasonable and a clear takeaway is that AI is a timesaving tool you should learn.
Where I share the parent's concern is with claims that AI is useless - which aren't coming from your post at all, but which I have definitely seen in the programmer community to this day. The parent's concern that some programmers are missing the train is, unfortunately, completely warranted.
I went through the parent comments looking for a claim that AI was "useless." I couldn't find one.
Yes, there are lots of skeptics among programmers when it comes to AI. I was one myself (and still am, depending on what we're talking about). My skepticism was rooted in the fact that AI is trained on human-generated output. Most human-written code is not very good, and so AI is going to produce not-very-good code by design, because that's what it was trained on.
Then you add to that the context problem. AI is not very good at understanding your business goals, or the nuanced intricacies of your problem domain.
All of this pointed to the fact, very early on, that AI would not be a good tool to replace programmers. And THAT'S the crux of why so many programmers pushed back. Because the hype was claiming that automation was coming for engineering jobs.
I have started to use LLMs regularly for a variety of tasks, including some engineering. But I always end up spending a lot of time refactoring what LLMs produce for me, code-wise. And much of the time I find that I'm still learning what the LLMs can do for me that truly saves me time, vs what would have been faster to just write myself in the first place.
LLMs are not useless. But if only 20% of a programmer's time is actually spent writing code on average, then even if you can net a 50% increase in coding productivity... you're only netting a 10% overall productivity gain for an engineer, BEST CASE SCENARIO.
And that's not "useless" but compared to the hype and bullshit coming out of the mouths of CEOs, it's as good as useless. It's as good as the MIT study finding that only 5% of generative AI projects have netted ANY measurable returns for the business.
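One footnote on the arithmetic above: the 10% figure is the simple product (20% x 50%); the stricter Amdahl-style calculation comes out even lower, which only strengthens the point.

    coding_share = 0.20   # fraction of an engineer's time spent writing code
    speedup = 1.5         # "50% increase in coding productivity"

    naive = coding_share * 0.5                                   # the 10% headline
    amdahl = 1 / ((1 - coding_share) + coding_share / speedup)   # Amdahl's law
    print(f"naive: {naive:.1%}  amdahl: {amdahl - 1:.2%}")       # 10.0% vs ~7.14%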
I know a company that replaced their sales call center with an AI calling bot instead. The bot got better sales and higher feedback scores from customers.
I'll say it again, since I've said it a million times: it can be useful and a bubble. The logic of investors before the last market crash was something like "houses are useful, so no amount of hype around the housing market could be a bubble".
Or, quite similarly, the internet bubble of the late ‘90s
Very obviously the internet is useful, and has radically changed our lives. Also obviously, most of the high stock valuations of the ‘90s didn’t pan out.
How are you using it? The execs and investors believe the road to profit is by getting rid of your role in the process. Do you think that’d be possible?
If you really think this, `baby` is an apt name! Internet, smartphones, and social media will all be more impactful than LLMs could possibly be... but hey, if you're like 18 y/o then sure, maybe LLMs are the biggest.
Also disagree with missing the train, these tools are so easy to use a monkey (not even a smart one like an ape, more like a Howler) can effectively use them. Add in that the tooling landscape is changing rapidly; ex: everyone loved Cursor, but now it's fallen behind and everyone loves Claude Code. There's some sense in waiting for this to calm down and become more open. (Why are users so OK with vendor lock-in??? It's bothersome)
The hard parts are running LLMs locally (what quant do I use? K/V quant? Tradeoffs? Llama.cpp or ollama or vllm? What model? How much context can I cram in my vram? What if I do CPU inference? Fine tuning? etc..) and creating/training them.
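To give a taste of the "how much context can I cram in my VRAM" question, here is a minimal sketch of the KV-cache arithmetic; the layer/head/dim numbers are assumed for a Llama-3-8B-style config, so check your own model's:

    # KV cache = 2 (K and V) * layers * kv_heads * head_dim * bytes/elem * tokens
    layers, kv_heads, head_dim = 32, 8, 128  # assumed Llama-3-8B-style config
    bytes_per_elem = 2                       # fp16; roughly halved with 8-bit K/V quant

    def kv_cache_gib(context_tokens: int) -> float:
        return 2 * layers * kv_heads * head_dim * bytes_per_elem * context_tokens / 2**30

    print(f"{kv_cache_gib(8192):.2f} GiB of KV cache for an 8k context")  # ~1 GiB at fp16

And that is on top of the weights themselves, which is where the choice of weight quant dominates.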
> It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.
If AI is going to be integral to society going forward, how is it shortsighted?
> She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure).
So you prefer a 2x gain rather than 10X gain from the likes of Nvidia or Broadcom? You should check how much better META has done compared to MSFT the past few years. Also a "financial investment person"? The anecdote feels made up.
> She skillfully navigated the question in a way that won my respect.
She won your respect by giving you advice that led to far less returns than you could have gotten otherwise?
> I personally believe that a lot of investment money is going to evaporate before the market resets.
But you believe investing in MSFT was a better AI play than going with the "hype" even when objective facts show otherwise. Why should anyone care what you think about AI, investments and the market when you clearly know nothing about it?
I really do wonder if any of those rock star $100m++ hires managed to get a 9-figure sign-on bonus, or if the majority have year(s) long performance clauses.
Imagine being paid generational wealth, and then the house of cards comes crashing down a couple of months later.
I'm sure everyone is doing just fine financially, but I think it's common knowledge that these kind of comp packages are usually a mix of equity and cash earned out over multiple years with bonuses contingent on milestones, etc. The eye-popping top-line number is insane but it's also unlikely to be fully realized.
The comment I was responding to was implying that it would be better for the collective if Meta was not paying these exorbitant salaries. You said “it [paying high salaries] is a great way to kneecap collective growth and development.”
In other words, you’re suggesting that _not_ paying high salaries would be good for collective growth and development.
And if Meta is currently willing to pay these salaries, but didn’t for some reason, that would be the definition of wage suppression.
Wage suppression is anytime a worker makes less than the absolute maximum an employer is willing to pay? That would include just about everyone making a paycheck.
Based on my cursory knowledge of the term, wage suppression here would be if FB manipulated external factors in the AI labor market so that their hire would accept a "lowball" offer.
Oh ya? If I am willing to pay my cleaner $350, but she only charges $200 and accepted my offer at that rate, am I engaging in the definition of wage suppression?
Supposedly, all people who join Meta are on the same contract. They also supposedly all have the same RSU vesting schedules as well.
That means that these "rockstars" will get a big sign-on bonus (but it's payable back within 12 months if they leave), then ~$2m every 3 months in shares.
It's not even in RSUs. No SWEs/researchers are getting $100M+ RSU packages. Zuck said the numbers in the media were not accurate.
If you still think they are, do you have any proof? Any sources?
All of these media articles have zero sources and zero proof. They just ran with it because they heard Sam Altman talk about it and it generates clicks.
I have never heard of anyone getting a sign on bonus that was unconditional. When I have had signing bonuses they were owed back prorated if my employment ended for any reason in the first year.
Are most people that money hungry? I wouldn't expect someone like Zuckerberg to understand, but if I ever got to more than a couple million dollars, I'm never doing anything else for the sake of making more money again.
This is a very weird take. Lots of people want to actively work on things that are interesting to them or impactful to the world. Places like Meta potentially give the opportunity to work on what may be the most impactful and interesting things in human history.
Setting that aside, even if the work was boring, I would jump at the chance to earn $100M for several years of white collar, cushy work, purely for the impact I could have on the world with that money.
It's not such a weird take from a perspective of someone who's never had quite enough money. If you've never had enough, the dream is having more than enough, but working for much much more than enough sounds like a waste of time and/or greed. Also, it's hard to imagine pursuing endeavors out of passion because you've never had that luxury.
I was at a startup where someone got an unconditional signing bonus. It wasn't deliberate; they just kept it simple because it was a startup and they thought they could trust the guy, an old friend of the CEO.
The guy immediately took leave to get some medical procedure done with a recovery time, then when he returned he quit for another job. He barely worked, collected a big signing bonus, used the company's insurance plan for a very expensive procedure, and then disappeared.
From that point forward, signing bonuses had the standard conditions attached.
If we're actually headed for a "house of cards" AI crash in a couple months, that actually makes their arrangement with Meta likely more valuable, not less. Meta is a much more diversified company than the AI companies that these folks were poached from. Meta stock will likely be more resilient than AI-company stock in the event of an AI bubble bursting. Moreover, they were offered so much of it that even if it were to crash 50%, they'd still be sitting on $50M-$100M+ of stock.
I am very certain that AI will slowly kill the rest of "social" in the social web outside of closed circles. And they made their only closed-circle app (WhatsApp) unusable and ad-infested. Imo either way they are still in the process of slowly killing themselves.
aka, made up. They can make up anything by saying that. There are numerous false articles published by WSJ about Tesla as well. I would take what they say here with a grain of salt. Zuck himself said the numbers in the media were wildly exaggerated and he wasn't offering these crazy packages as reported.
Is it imminent? Reading the article, the only thing that's actually changed is that the CEO has stopped hand-picking AI hires and has placed that responsibility on Alexandr Wang instead. The rest is just fluff to turn it into an article. The tech sector being down is happening in concert with the non-tech sector sliding too.
I'm somewhere in the middle on this, with regards to the ROI... this isn't the kind of thing where you see immediate reflection on quarterly returns... it's the kind of thing where if you don't hedge some bets, you're likely to completely die out from a generational shift.
Facebook's product is eyeballs... they're being usurped on all sides by TikTok, X and BlueSky in terms of daily/regular users... They're competing with Google, X, MS, OpenAI and others in terms of AI interactions. While there's a lot of value in being the option for communication between friends and family, and the groups on FB don't have a great alternative, the entire market can shift greatly depending on AI research.
I look at some of the generated terrain/interaction demos (I think it was OpenAI's) and can't help but think that's a natural coupling to FB/Meta's investments in their VR headsets. They could potentially completely lose on a platform they largely pioneered. They could wind up like Blackberry if they aren't ready to adapt.
By contrast, Apple's lack of appropriate AI spending should be very concerning to any investors... Google's assistant is already quite a bit better than Siri and the gap is only getting wider. Apple is woefully under-invested, and the accountants running the ship don't even seem to realize it.
I think Apple is fine. When AI works without 1-in-5 hallucinations, then it can be added to their products. Showing up late with features that exist elsewhere, but polished in an Apple presentation, is the way.
Have you used Siri recently? It's actually amazing how consistently it can be crap at tasks, considering the underlying tech. 1-in-5 hallucinations would be a welcome improvement.
Using ChatGPT voice mode and Siri makes Siri feel like a legacy product.
I don’t think that’s the point. Yes, Siri is crap, but Apple is already working on integrating LLMs at the OS level and those are shipping soon. It’s a quick fix to catch up in the AI game, but considering their track record, they’re likely to eventually retire third party partnerships and vertically integrate with their own models in the future. The incentive is there—doing so will only boost their stock price.
Just because the LLM can access the kernel does not miraculously make it better. Do you think the problem with Apple AI right now is that they don't have access to OS components?
I do think kernel-level access is not needed, since there is already a good amount of automation available without it. What Apple could do, however, is stop requiring another laptop connected to automate your phone, and instead run this on the NPU/GPU inside the phone itself. I am surprised they haven't done it already.
In practice though, their platform is closed to any assistant other than theirs, so they have to come up with a competent service (basically Ben Thompson's "strategy tax" playing out in full)
That question will be moot the day Apple allows other companies to ingest everything that's happening on the device and operate the whole device in response to the user's requests, and some company actually does a decent job at it.
Today Google is doing a decent job and Apple isn't.
You’re right, they don’t need AI. I finally stopped using Google search after they added the AI summary and didn’t add a way to turn it off. I’m just as bothered by Apple’s lack of AI as I am by their lack of a touch screen on MacBooks. I use AI when I need AI.
I think in general we want granularity and choice.
So not just in how much AI there is, but what AI, where it's applied and where we can turn it off, and what context it has access to and where it's off bound.
If you're talking Apple... if you used a "standard" 2-3 button mouse, it worked in OS X from pretty much the start IIRC. I always used a PC mouse for that reason.
oh absolutely. They have had support for aftermarket mice for a while. Their track pads have supported "right click" for a long time too.
Then again, they've always been way better at making track pads than mice. They have probably the best track pad in the business, and also the Magic Mouse, which everyone hates.
The AI technology might be nice imo, but it's nowhere near worth the amount of money being spent. It's dumpster-fire amounts of money, and the weirdness of everything being AI wrapper slop is so... off-putting.
Things can be good and still be a bubble, just as the internet was cool but the dot-com bubble existed.
They become a bubble when things stop making sense economically.
They did it to themselves. Facebook is not the same site I originally joined. People were allowed to people. Now I have to worry about the AI banning me.
I deleted my Facebook account 10 years ago, and I’ve been off Instagram for half a year. I recently tried to create a new Facebook account so that I could create a Meta Business account to use the WhatsApp API for my business. Insta-ban with no explanation. No recourse.
> They're competing with Google, X, MS, OpenAI and others in terms of AI interactions
Am I the only one who finds the attempts to jam AI interactions into Meta's products useless, to the point that they only detract from the product? Like there'll be posts with comedy things, and then there are suggested 'Ask Meta AI' earnest questions about things the comedy mentions - it's not only irrelevant, but I guess it's kind of funny how random and stupid the questions are. The 'Comment summaries' are counter-productive because I want to have a chuckle reading what people posted; I literally don't care to have it summarised when I can just skim over a few in seconds - literally useless. It's the same thing with Gemini summaries on YouTube - I feel they actually detract from the experience of watching the videos, so I actively avoid them.
On what Apple is doing - I mean, literally nothing Apple Intelligence offers excites me, but at the same time nothing anybody else is doing with LLMs really does either... And I'm highly technical, general people are not actually that interested apart from students getting LLMs to write their homework for them...
It's all well and good to be excited about LLMs but plenty of these companies' customers just... aren't... If anything, Apple is playing the smart move here: let others spend (and lose) billions training the models and not making any real ROI, and license the best ones for whatever turns out to actually have commercial appeal when the dust settles and the models are totally commodified...
I was thinking about this... if you look at the scene generation and interaction demos (I think OpenAI's), they're a pretty natural fit for Meta's VR efforts. Not that I'm sold on VR social networks, but there's definitely room for VR/AR enhancements... and even then AI has a lot of opportunities beyond just LLM integration into FB/Groups.
Aside: Groups is about the only halfway-decent feature in FB, and they seem to be trying to make it worse. The old chat integration was great; then they removed it, and now you get these invasive Messenger rooms instead.
How long did it take Space-X to catch a rocket with giant chopsticks?
It's more than okay for a company with other sources of revenue to do research towards future advancement... it's not risking the downfall of the company.
The MIT report that has everyone talking was about 95% of companies not seeing return on investment in using AI, and that is with the VC subsidised pricing. If it gets more expensive that math only gets worse.
I can't predict the future, but one possibility is that AI will not be a general purpose replacement for human effort like some hope, but rather a more expensive than expected tool for a subset of use cases. I think it will be an enduring technology, but how it actually plays out in the economy is not yet clear.
Deja vu: Zuck already scaled down their AI research team a few years ago, as I remember, because it didn't deliver any tangible results. Meta culture likes improving metrics like retention/engagement, and promotes managers if they show some improvement in their metrics. No one cares about long shots generally, and a research team is always the long shot.
Facebook - 12 billion!?
TikTok - 1.59 billion
X - 611 million
Bsky - 38 million
That's according to DemandSage... I'm not sure I can trust the numbers; FB supposedly jumped up from around 3b last year, which again I don't trust. 12b is more than the global population, so it's got to be all bots. And even the 3b number is hard to believe (close to half the global population); I have no idea how much of the population of earth even has internet access.
From Grok:
Facebook - 3.1 billion
TikTok - 1.5-2 billion
X - 650 million
Bsky - 4.1 million
Looks like I'm definitely in a bubble... I tend to interact 1:1 as much on X as Facebook, which is mostly friends/family and limited discussions in groups. A lot of what I see on feeds is copy/pasta from tiktok though.
That said, I have a couple friends who are die hard on Telegram.
Pardon me, but I am just a little surprised as to how Telegram came up in that last paragraph? We were talking about social media, and Telegram is a messaging app...
Telegram Groups sometimes blur the line between message-only interactions and social media dynamics, depending on how many bots and features they got running.
Yeah... the friends in question run groups/chats including group live streams, which is like a half a step up from X's Spaces as opposed to twitch or youtube streams that seem to have text only chat.
Telegram groups seems to be pretty popular among security minded, survivalists and actual far-right, there's moderate right users in there as well though. Nothing like death threats from ignorant nutjobs though, usually get that from the antifa types, having worked in election services in 2019/2020.
I'm far from being a fan of the company, but I think this article is substantially overstating the extent of the "freeze" just to drum up news. It sounds like what's actually happening is a re-org [1] - a consolidation of all the AI groups under the new Superintelligence umbrella, similar to Google merging Brain and DeepMind, with an emphasis on finding the existing AI staff roles within the new org.
From Meta itself: “All that’s happening here is some basic organisational planning: creating a solid structure for our new superintelligence efforts after bringing people on board and undertaking yearly budgeting and planning exercises.”
Since "AI bubble" has become part of the discourse, people are watching for any signs of trouble. Up to this point, we have seen lots of AI hype. Now, more than in the past, we are going to see extra attention paid to "man bites dog" stories about how AI investment is going to collapse.
So it's not clickbait, even though the headline does not reflect the contents of the article, because you believe the headline is plausible?
I think AI is a bubble, but there's nothing in here that indicates they have frozen hiring or that Zuckerberg is cautious of a bubble. Sounds like they are spending even more money per researcher.
Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips and Moore’s law is already dead.
So newer chips will not be exponentially better but will be more of incremental improvements, so unless the price of electricity comes down exponentially we might never see AGI at a price point that’s cheaper than hiring a human.
Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.
The reason the internet, smartphones and computers saw exponential growth from the 90s onward is the underlying increase in computing power. I personally used a 50MHz 486 in the 90s and now use an 8c/16t 5GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.
We are either limited by compute, available training data, or algorithms. You seem to believe we are limited by compute. I've seen other people argue that we are limited by training data. It is my totally inexpert belief that we are substantially limited by algorithms at this point.
I think algorithms are a unique limit because they change how much data or compute you need. For instance, we probably have the algorithms we need to brute-force more problems today, but they require infeasible compute or data. We can almost certainly train a new 10T-parameter mixture of experts that continues to make progress on benchmarks, but it will cost so much to train and be completely undeployable with today’s chips, data, and algorithms.
So I think the truth is likely we are both compute limited and we need better algorithms.
There are a few "hints" that suggest to me algorithms will bear a lot more fruit than compute (in terms of flops):
1) there already exist very efficient algorithms for rigorous problems that LLMs perform terribly at!
2) learning is too slow and is largely offline
3) "llms aren't world models"
General intelligence exists in this world, so the inability to transfer it to a machine does seem like an algorithm problem. When it’s here, we don’t even know if it will be an LLM; no one knows the compute requirements.
We are limited by both compute and available training data.
If all we wanted was to train bigger and bigger models we have more than enough compute to last us for years.
Where we lack compute is in scaling the AI to consumers. Current models take too much power and specialized hardware to be profitable.
If AI was able to improve your productivity by 20-30% but cost you even 10% of your monthly salary, no one would use it. I have used up $10 worth of credits in an hour with Claude Code multiple times. Assuming I use it continuously for 8 hours a day, 24 days a month, that's 10 * 8 * 24 = $1920 (rough sketch below). So it's not that far off the current cost of running the models.
If the size of the models scales faster than the speed of the inference hardware, the problem is only going to get worse.
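To make the salary comparison explicit, a minimal back-of-the-envelope sketch (the salary figure is assumed purely for illustration):

    hourly_credit_burn = 10  # $ per hour of heavy Claude Code use, from above
    hours_per_day, days_per_month = 8, 24
    monthly_cost = hourly_credit_burn * hours_per_day * days_per_month  # $1920

    monthly_salary = 8_000   # assumed for illustration
    print(f"${monthly_cost}/month = {monthly_cost / monthly_salary:.0%} of salary")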
I too believe that we will eventually discover an algorithm that gives us AGI. The problem is that we cannot will a breakthrough. We can make one more likely by investing more and more into AI but breakthroughs and research in general by their nature are unpredictable.
I think investing in new individual ideas is very important and gives us lot of good returns. Investing in a field in general hoping to see a breakthrough is a fool's errand in my opinion.
If the LLM is multimodal, would more video and images improve the quality of the textual output? There’s a ton of that, and it’s always easy to get more.
We are a few months into our $bigco AI push and we are already getting token constrained. I believe we really will need massive datacenter rollouts in order to get to the ubiquity everyone says will happen.
Mission accomplished: who would have thought that disrupting your competition by poaching their talent and erasing value (giving it away for free) would make people realize there is no long-term value in the core technology itself.
Don't get me wrong: we are moving toward commoditization. As with any new tech, it'll become transparent to our lifestyle and a lot of money will be made as an industry, but it'll be hard to compete on it as a core business competence w/o cheating (and by cheating I mean your FANG company already has a competitive advantage)
Whoa that's actually a brilliant strategy: accelerate the hype first by offering 100M comp packages, then stop hiring and strategically drop a few "yeah bubble's gonna pop soon" rumours. Great way to fuck with your competition, especially if you're meta and you're not in the lead yourself
But if Meta believe it's a bubble then why not let the competition continue to waste their money pumping it up? How does popping it early benefit Meta?
Make a mistake once, it’s misjudgment. Repeat it, it’s incompetence?
Meta nearly doubled its headcount in 2020 and 2021, assuming the pandemic growth would continue. However, Zuckerberg later admitted this was a mistake.
it's not really a fair characterisation, because he persisted for nearly 10 years dumping enormous investment into the VR business, and still is to this day. Furthermore, Meta's AI labs predated all the hype and the company was investing and highly respected in the area way before it was "cool".
If anything, I think the panic at this stage is arising from the sense of having his lunch stolen after having invested so much and for so long.
> 1000 people can't get a woman to have a child faster than 1 person.
I always get slightly miffed about business comparisons to gestation: getting 9 women pregnant won't get you a child in 1 month.
Sure, if you want one child. But that's not what business is often doing, now is it?
The target is never "one child". The target is "10 children", or "100 children" or "1000 children".
You are definitely going to overrun your ETA if your target is 100 children in 9 months using only 100 women.
IOW, this is a facile comparison not worthy of consideration.[1]
> So it depends on the type of problem you're trying to solve.
This[1] is not the type of problem where the analogy applies.
=====================================
[1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!
>> Sure, if you want one child. But that's not what business is often doing, now is it?
You're designing one thing. You're building one plant. Yes, you'll make and sell millions of widgets in the end, but the system that produces them? Just one.
Engineering teams do become less efficient above some size.
You might well be making 100 AI babies, and seeing which one turns out to be the genius.
We shouldn’t assume that the best way to do research is just through careful, linear planning and design. Sometimes you need to run a hundred experiments before figuring out which one will work. Smart and well-designed experiments, yes, but brute force + decent theory can often solve problems faster than just good theory alone.
The analogy is a good analogy. It is used to demonstrate that a larger workforce doesn’t automatically give you better results, and that there is a set of problems, identifiable a priori, where that applies. For some problems, quality is more important than quantity, and you structure your org accordingly. See sports teams, for example.
In this case, you want one foundation model, not 100 or 1000. You can’t afford to build 1000. That’s the one baby the company wants.
> In this case, you want one foundation model, not 100 or 1000. You can’t afford to build 1000. That’s the one baby the company wants.
I am going to repeat the footnote in my comment:
>> [1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!
IOW, if you're looking specifically for quality, you can't bet everything on one horse.
In re Wendy’s, it depends on whether you have a standard plan for building the Wendy’s and know what skills you need to hire for. If you just hire 10,000 random construction workers and send them out with instructions to “build 100 Wendy’s”, you are not going to succeed.
At the scale we're talking about though, if you need a baby in one month, you need 12,000 women. With that many women, the math says you should have a woman that's already 8 months pregnant, and you'll have a baby in 1 month.
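For fun, the closed-form version of that claim, assuming (purely for illustration) that some small share of 12,000 random hires are already pregnant, at a uniformly random stage:

    n_women = 12_000
    p_pregnant = 0.03               # assumed share already pregnant when hired
    p_month_eight = p_pregnant / 9  # uniform stage => 1/9 are at month 8+

    # Probability that at least one hire delivers within a month:
    print(1 - (1 - p_month_eight) ** n_women)  # ~1.0, i.e. virtually certain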
What I don't get is that they are gunning for the people that brought us the innovations we are working with right now. How often does it happen that someone really strikes gold a second time in research at such a high level? It's not a sport.
You're falling victim to the Gambler's Fallacy - it's like saying "the coin just flipped heads, so I choose tails, it's unlikely this coin flips heads twice in a row".
Realistically they have to draw from a small pool of people with expertise in the field. It is unlikely _anyone_ they hire will "strike gold", but past success doesn't make future success _less_ likely. At a minimum I would assume past success is uncorrelated with future success, and at best there's a weak positive correlation because of reputation, social factors, etc.
Even if they do not strike gold the second time, there can still be a multitude of reasons:
1. The innovators will know a lot about the details, limitations and potential improvements concerning the thing they invented.
2. Having a big name in your research team will attract other people to work with you.
3. I assume the people who discovered something still have a higher chance to discover something big compared to "average" researchers.
4. That person will not be hired by your competition.
5. Having a lot of very publicly extremely highly paid people will make people assume anyone working on AI there is highly paid, if not quite as extreme. What most people who make a lot of money spend it on is wealth signalling, and now they can get a form of that without the company having to pay them as much.
Who else would you hire? With a topic as complex as this, it seems most likely that the people who have been working at the bleeding edge for years will be able to continue to innovate. At the very least, they are a much safer bet than some unproven randos.
Exactly this: having understood the field well enough to add new knowledge to it has to be a pretty decent signal for a research-level engineer.
At the research level it’s not just about being smart enough, or being a good programmer, or even completely understanding the field - it’s also about having an intuitive understanding of the field where you can self pursue research directions that are novel enough and yield results. Hard to prove that without having done it before.
Reworded from [1]: Earlier this year Meta tried to acquire Safe Superintelligence. Sutskever rebuffed Meta’s efforts, as well as the company’s attempt to hire him.
> Didn't he say their goal is AGI and they will not produce any products until then.
Did he specify what AGI is? xD
> I admire that, in this era where CEOs tend to HYPE!! To increase funding (looking at a particular AI company...)
I think he was probably hyping too, it's just that he appealed to a different audience. IIRC they had a really plain website, which, I think, they thought "hackers" would like.
They didn't just invest they made it core to their identity with the name change and it just fell so so flat because the claims were nonsense hype for crypto pumps. We already had stuff like VR Chat (still going pretty strong) it just wasn't corporate and sanitized for sale and mass monetization.
They're still on it though. The new headset prototypes with high FOV sound amazing, and they are iterating on many designs.
They're already doing something like ~$500M/year in Meta Quest app sales. Granted not huge yet after their 30% cut, but sales should keep increasing as the headsets get better.
I haven't seen any evidence that Meta is backtracking on VR. They've got more than enough money to focus on both; in fact, they probably need to. Gen AI is a critical complement to the metaverse. Without gen AI, metaverse content is too time-consuming to make.
I can see the value in actual AI. But it seems like in many instances how it is being utilized or applied is more related to terrible search functionality. Even for the web, it seems like we’re using AI to provide more refined search results, rather than just fixing search capabilities.
Maybe it’s just easier to throw ‘AI’ (heavy compute of data) at a search problem, rather than addressing the crux of the problem…people not being provided with the tools to query information. And maybe that’s the answer but it seems like an expensive solution.
That said, I’m not an expert and could be completely off base.
> is more related to terrible search functionality
If you looked at $ spent/use case, I would think this is probably the bottom of the list, probably with the highest use of that being in the free tiers.
A trillion dollars of value disappearing in 2 days. We've still got our NFT metaverse shipping waybill project going on somewhere in the org chart, right? Phew!
That's because it was never real to begin with. "Market cap" and "value" are not the same thing. "Value" is "I actually need this and it will dramatically improve my life". "Market cap" is "I can sell this to some idiot".
It'll be somewhere in between. A lot of capital will be burned, quite a few marginal jobs will be replaced, and AI will run into the wall of not enough new/good training material because all the future creators will be spoiled by using AI.
Even that came after "AI is going to make itself smarter so fast that it's inevitably going to kill us all and must be regulated" talk ended. Remember when that was the big issue?
I've seen a few people convince themselves they were building AGI trying to do that, though it looked more like the psychotic ramblings of someone entering a manic episode committed to github. And so far none of their pet projects have taken over the world yet.
It actually kind of reminds me of all those people who snap thinking they've solved P=NP and start spamming their "proofs" everywhere.
Makes sense. Previously the hype was so all-encompassing that CEOs could simply rely on an implicit public perception that it was coming for our jerbs. Once they have to start explicitly saying that line themselves, it's because that perception is fading.
Metaverse (especially) or AI might make more sense if you could actually see your friends' posts (and vice versa), if the feed made sense (which it hasn't for years now), and if you could message people who you aren't friends with yet without it getting lost in some 'other' folder you won't discover until 3 years from now (Gmail has a Spam folder problem... but the difference is you can see you have messages there and you can at least check it out for yourself)
What I'm trying to say is make your product the barest minimum usable first maybe? (Also, don't act like, as Jason Calacanis has called it, a marauder, like copying everything from everyone all the time. What he's done with Snapchat is absolutely tasteless and in the case of spying on them - which he's done - very likely criminal)
“In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?”
People probably said the same thing about “what if someone doesn’t want to carry a phone with them everywhere”. If it’s useful enough the culture will change (which, I unequivocally think they won’t be, but I digress)
Last night I had a technical conversation with ChatGPT that was so full of wild hallucinations at every step, it left me wondering if the main draw of "AI" is better thought of as entertainment. And whether using it for even just rough discovery actually serves as a black hole for the motivation to get things done.
I'm actually a little shocked that AI hasn't been integrated into games more deeply at this point.
Between Whisper and lightweight tuned models, it wouldn't be super hard to have onboard AI models that you can interact with in much more meaningful ways than we have traditionally interacted with NPCs.
When I meet an NPC castle guard, it would be awesome if they had an LLM behind it that was instructed to not allow me to pass unless I mention my Norse heritage or whatever.
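A minimal sketch of that idea, assuming a local OpenAI-compatible inference server; the endpoint and model name below are placeholders:

    from openai import OpenAI

    # Point the client at a local server that speaks the OpenAI API.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    GUARD_PROMPT = (
        "You are a castle guard NPC. Refuse to let the player pass unless "
        "they mention their Norse heritage. Stay in character and reply in "
        "one or two sentences."
    )

    def guard_reply(player_line: str) -> str:
        resp = client.chat.completions.create(
            model="local-model",  # placeholder model name
            messages=[
                {"role": "system", "content": GUARD_PROMPT},
                {"role": "user", "content": player_line},
            ],
        )
        return resp.choices[0].message.content

    print(guard_reply("I hail from the fjords, kin of Erik the Red."))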
I don't want to come across as a shill, but I think superintelligence is being used here because the end result is murky and ill-defined at this point.
I think the concept is like: "a tool that has the utility of a 'personal assistant', so much so that you wouldn't have to hire one of those." (Not so much that the "superintelligence" will mimic a human personal assistant.)
Clickbait title and article. There was a large reorg of genai/msl and several other teams, so things have been shuffled around and they likely don't want to hire into the org while this is finalizing.
A freeze like this is common and basically just signals that they are ready to get to work with the current team they have. The whole point of the AI org is to be a smaller, more focused, and lean org, and they have been making several strategic hires for months at this point. All this says is that zuck thinks the org is in a good spot to start executing.
From talking with people at and outside of the company, I don't have much reason to believe that this is some kneejerk reaction to some supposed realization that "its all a bubble." I think people are conflating this with whatever Sam Altman said about a bubble.
The problem with sentiment driven market phenomena is they lack fundamental support. When they crash, they can really crash hard. And as much as I see real value in the progress in AI, 95% of the investment I see happening is all based on sentiment right now. Actually deploying AI into real operational scenarios to unlock the value everyone is talking about is going to take many years and it will look like a sink hole of cost well before that. Buckle up.
The bubble narrative is coming from the outside. More likely is that the /acquisition/ of Scale has led to an abundance of talent that is being underutilised. If you give managers the option to hire, they will. Freezing hiring while reorganising is a sane strategy regardless of how well you are or are not doing.
Perhaps. But more like, there's a new boss who wants to understand the biz before doing any action. I've done this personally at a much smaller scale of course.
You could just keep holding interviews yet never actually hire anyone, on the grounds that the talent pool is wide but shallow. It has the same result as a freeze, but without the negative connotation for the company, shifting it onto the workforce instead.
Wow, there's really _zero_ sense of mutual respect in this industry, is there? It's all just "let's make a buck by being total assholes to everyone around us".
Maybe they are poisoning the well to slow their competitors? Get the funding you need secured for the data centers and the hiring, hire everyone you need and then put out signals that there is another AI winter.
Zuckerberg's leadership style feels very reactionary and arrogant: flailing around for the new fad and new hyped thing, scrapping everything when the current obsession doesn't work out, and then sticking his head in the sand about abandoned projects and ignoring the subsequent whiplash.
Remember when he pivoted the entire company to the metaverse and it was all about avatars with no legs? And how proudly they trumpeted when the avatars were "now with legs!!" but still looked so pathetic to everyone not in his bubble. Then for a while it was all about Meta glasses, and he was spamming those goofy cringe glasses no one wants in all his Instagram posts. Seriously, if you check out his Insta he wears them constantly.
Then this spring/summer it was all about AI, stealing rockstar AI coders from competitors and pouring endless money into flirty chatbots for lonely seniors. Now we have some bad press from that and are realizing it isn't the panacea we thought it was, so we're in the phase where this is languishing; in about 6 months we'll abandon it and roll out a new obsession that will be endlessly hyped.
Anything to distract from actually giving good stewardship and fixing the neglect and stagnation of Meta's fundamental products like facebook and insta. Wish they would just focus on increasing user functionality and enjoyment and trying to resolve the privacy issues, disinformation, ethical failures, social harm and political polarization caused by his continued poor management.
> Anything to distract from actually giving good stewardship and fixing the neglect and stagnation of Meta's fundamental products like facebook and insta.
> Zuckerberg's leadership style feels very reactionary and arrogant: flailing around for the new fad and new hyped thing, scrapping everything when the current obsession doesn't work out, and then sticking his head in the sand about abandoned projects and ignoring the subsequent whiplash.
Maybe he's like this because the first few times he tried it, it worked.
Insta threatening the empire? Buy Insta, no one really complains.
Snapchat threatening Insta? Knock off their feature and put it in Insta. Snap almost died.
The first couple times Zuckerberg threw elbows he got what he wanted and no one stopped him. That probably influenced his current mindset, maybe he thinks he's God and all tech industry trends revolve around his company.
By Amara's Law and the Gartner hype cycle, every technological breakthrough looks like a bubble. Investors and technologists should already know that. I don't know why they're acting like altcoins in 2021.
1 breakthrough per 99 bubbles would make anyone cautious. The rule should be to assume a bubble is happening by default until proven otherwise by time.
That's actually how you create a death spiral for your company. You have to assume 'growth' and not 'death'. 'life' over 'lost'. 'flourishing' over 'withering'. That you're strong enough to survive.
That's not playing into a bubble, that's creating a product for a market. You could also argue the Apple Vision is a misplay, or at least premature.
They've also arrogantly gone against consumer direction time and time again (PowerPC, Lightning Ports, no headphone jack, no replaceable battery, etc.)
And finally, sometimes their vision simply doesn't shake out (AirPower)
Oh, yeah, Apple Vision is a complete joke. I'm an Apple apologist to a degree though, so I can rationalize all their missteps. I won't deny they have had many, though.
IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill. There’s probably a proper term for this.
I think that meta is bad for the world and that zuck has made a lot of huge mistakes but calling him a one hit wonder doesn't sit right with me.
Facebook made the transition to mobile faster than other competitors and successfully kept G+ from becoming competition.
The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.
Zuck hired Sheryl Sandberg and successfully turned a website with a ton of users into an ad-revenue machine. Plenty of other companies struggled to convert large user bases into dollars.
This obviously wasn't all based on him. He had other people around him working on this stuff and it isn't right to attribute all company success to the CEO. The metaverse play was obviously a legendary bust. But "he just got lucky" feels more like Myspace Tom than Zuckerberg in my mind.
No one else is adding the context of where things were at the time in tech...
> The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.
Facebook's API was incredibly open and accessible at the time and Instagram was overtaking users' news feeds. Zuckerberg wasn't happy that an external entity was growing so fast and onboarding users so easily that it was driving more content to news feeds than built-in tools. Buying Instagram was a defensive move, especially since the API became quite closed-off since then.
Your other points are largely valid, though. Another comment called the WhatsApp purchase "inspired", but I feel that also lacks context. Facebook bought a mobile VPN service used predominantly by younger smartphone users, Onavo(?), and realized the amount of traffic WhatsApp generated by analyzing the logs. Given the insight and growth they were monitoring, they likely anticipated that WhatsApp could usurp them if it added social features. Once again, a defensive purchase.
I don't think we can really call the Instagram purchase purely defensive. They didn't buy it and then slowly kill it. They bought it and turned it into a product of comparable size to their flagship with sustained, large investment.
Buying competitors is not insane or a weird business practice. He was probably advised to do so by the competent people under him
And what did he do to keep G+ from becoming a valid competitor? It killed itself. I signed up but there was no network effect and it kind of sucked. Google had a way of shutting down all their product attempts too
If you read Internal Tech Emails (on X), you’ll see that he was the driving force behind the key acquisitions (successes as well as failures such as Snap).
I am also not saying that zuck is a prescient genius who is more capable than other CEOs. I am just saying that it doesn't seem correct to me to say that he is "a textbook case of somebody who got lucky once."
I hate pretty much everything about Facebook but Zuckerberg has been wildly successful as CEO of a publicly traded company. The market clearly has confidence in his leadership ability, he effectively has had sole executive control of Facebook since it started and it's done very well for like 20 years now.
>has been wildly successful as CEO of a publicly traded company.
That has a lot to do with the fact that it's a business centric company. His acumen has been in user growth, monetization of ads, acquisitions and so on. He's very similar to Altman.
The problems start when you try to venture into hard technological topics, like the Metaverse fiasco, where you have to have a sober and engineering oriented understanding of the practical limits of technology, like Carmack who left Meta pretty frustrated. You can't just bullshit infinitely when the tech and not the sales matter.
Contrast it with Gates who had a serious programming background, he never promised even a fraction of the cringe worthy stuff you hear from some CEOs nowadays because he would have known it's nonsense. Or take Apple, infinitely more sane on the AI topic because it isn't just a "more users, more growth, stonks go up" company.
He's really not. Facebook is an extremely well run organization. There's a lot to dislike about working there, and there's a lot to dislike about what they do, but you cannot deny they have been unbelievably successful at it. He really is good at his job, and part of that has been making bold bets and aggressively cutting unsuccessful bets.
Facebook can be well run without that being due to Zuck.
There are literally books that make this argument from insider perspectives (which doesn't mean it's true, but it is possible, and does happen regularly).
A basketball team can be great even if their coach sucks.
You can't attribute everything to the person at the top.
That is true, but in Meta’s case it is tightly managed by him. I remember a decade ago a friend was a mid-level manager and would give exec reviews to Zuck, who could absorb information very quickly and redirect feedback to align with his product strategy.
He is a very hands-on CEO, not one who relies on experts to run things for him.
In contrast, I’ve heard that Elon has a very good senior management team and they sort of know how to show him shiny things that he can say he’s very hands on about while they focus on what they need to do.
He created the company; if it is well run, it was thanks to him hiring the right people. Regardless of how you slice it, he is a big reason it didn't fail. Most companies like that fail when they scale up and hire a lot of people, but Facebook didn't, and hiring the right people is not luck.
I can’t tell if you’re being tongue in cheek or not, so I’ll respond as if you mean this.
It’s easy to cherry pick a few bets that flopped for every mega tech company: Amazon has them, Google has them, remember Windows Phone? etc.
I see the failures as a feature, not a bug - the guy is one of the only founder CEOs to have ever built a $2T company (trillion with a T). I imagine part of that is being willing to make big bets.
And it also seems like no individual product failure has endangered their company’s footing at all.
While I’m not a Meta or Zuck fan myself, using a relatively small product flop as an indication a $2T tech mega corp isn’t well run seems… either myopic or disingenuous.
Parent comment says "aggressively cutting unsuccessful bets" and Oculus is nothing like that.
The Oculus Quests are decent products, but a complete flop compared to their investment and Zuck's vision of the metaverse. Remember, they even renamed the company. You could say they're betting on the long run, but I just don't see that paying off in 5 or even 10 years.
As an owner of Quest 2 and 3, I'd love to be proven wrong though. I just don't see any evidence of this would change any time soon.
The VR venture can also be seen as a huge investment in hard tech and competency around issues such as location tracking and display tech for creating AI-integrated smartglasses, which many believe are the next-gen AI interface. Even if the current headsets or form factor do not pay off, I think having this knowledge could be very valuable soon.
I don’t think their “flops” of Oculus or Metaverse have endangered their company in any material way, judging by their stock’s performance and the absurd cash generating machine they have.
Even if they aren’t great products or just wither into nothing, I don’t think we will see an HBS case study in 20 years saying, “Meta could have been a really successful company, were it not for their failure in these two product lines.”
Absolutely, not everything they do will succeed but that's okay too, right? At this point their core products are used by 1 in 2 humans on earth. They need to get people to have more kids to expand their user base. They're gonna throw shit at the wall and not everything will stick, and they'll ship stuff that's not quite done, but they do have to keep trying; I can't bring myself to call that "failure."
I agree, but that does not make Oculus a commercially successful and viable product. They are still bleeding cash on it, and VR is not going mainstream any time soon.
But they were less “skill” and more “surveillance”. He had very good usage statistics (which he shouldn’t have had) of these apps through Onavo - a popular VPN app Facebook bought for the purpose of spying on what users are doing outside Facebook.
WhatsApp is certainly worth less today than what they paid for it plus the extra funding it has required over time. Let alone producing anything close to ROI. Has lost them more money than the metaverse stuff.
Insta was a huge hit for sure, but since then Meta's capital allocation has been a disaster, including a lot of badly timed buybacks.
> IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill.
It is no secret that the person who turned Facebook into a money-printing machine is/was Sheryl Sandberg.
Thus, the evidence is clear that Mark Zuckerberg had the right idea at the right time (the question is whether this was because of his skills or because he got lucky), but turning his good idea(s) into a successful business was done by other people (led by Sheryl Sandberg).
And isn’t the job of a good CEO to put the right people in the right seats? So if he found a superstar COO that took the company into the stratosphere and made them all gazillionaires…
Wouldn’t that indicate, at least a little bit, a great management move by Zuck?
You're probably going to get comments like "Social networking existed before. You can't steal it." Well, on top of derailing someone else's execution of said not-stolen idea (which makes you a jerk), in the case of those he 'stole' from: for starters, maybe it was existing code (I don't know if that was ever proven), but maybe it was also the Winklevosses' idea of using .edu email addresses, and possibly other concepts.
Do I think he stole it? Dunno. (Though Aaron Greenspan did log his houseSYSTEM server requests, which seems pretty damning) But given what he's done since (Whatsapp, copying every Snapchat feature)? I'd say the likelihood is non-zero
The term you're looking for is "billionaire". The amount of serendipity in these guys' lives is truly baffling, and only becomes more apparent the more you dig. It makes sense when you realize their fame is all survivorship bias. After all, there must be someone at the tail end of the bell curve.
It is at least a little suspicious that one week he's hiring like crazy, then next week, right after Sam Altman states that we are in an AI bubble, Zuckerberg turns around and now fears the bubble.
Maybe he's just gambling that Altman is right, saving his money for now, and will be able to pick up AI researchers and developers at a massive discount next year. Meta doesn't have much of a presence in the space right now, and they have other businesses, so waiting a year or two might not matter.
Ehh. You don’t get FB to where it is by being incompetent. Maybe he is not the right leader for today. Maybe. But you have to be right way, way more often than not to create a FB and get it to where it is. To operate from where it started to where it is just isn’t an accident or Dunning-Kruger.
Maybe this time the top posters on HN should stop criticizing one of the top performing founder CEOs of the last 20 years who built an insane business, made many calls that were called stupid at the time (WhatsApp), and many that were actually stupid decisions.
Like do people here really think making some bad decisions is incompetence?
If you do, your perfectionism is probably something you need to think about.
Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company. Oh and please revisit your comment in these timeframes
Sure, but society is full of fools. Plenty of people say social media is the primary way they get news. Social media platforms are super spreaders of lies and propaganda.
I don't think it's about perfect predictions. It's more about going all in on Metaverse and then on AI and backtracking on both. As a manager you need to use your resources wisely, even if they're as big as what Meta has at its disposal.
The other thing: the Peter principle says people rise until they hit a level where they can't perform anymore. Zuck is up there as high as you can go; maybe no one is really ready to operate at that level? It seems both he and Elon have made a lot of bad decisions lately. It doesn't erase their previous good decisions, but possibly some self-reflection is warranted?
> Like do people here really think making some bad decisions is incompetence?
> If you do, your perfectionism is probably something you need to think about.
> Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company.
It's the effect of believing (and being sold) meritocracy: if you are making literal billions of dollars for your work, then some will think it should be spotless.
I'm not saying I think that way, but it's probably what a lot of people consider: being paid that much signals that your work should be absolutely exceptional, and big failures just show they are also normal, flawed people, so perhaps they shouldn't be worth a million times more than other normal, flawed people.
He’s earned almost all his money through owning part of a company that millions of shareholders think is worth trillions, and does in fact generate a lot of profits.
A committee didn’t decide Zuckerberg is paid $30bn.
And I'd say his work is pretty exceptional. If it wasn't, then his company wouldn't be growing, and he'd probably be pressured into resigning as CEO.
Yes, I know all of those semantics; thanks for stating the obvious.
Being rewarded for creating a privacy destroying advertising empire, exceptional work. Imagine a world where the incentives were a bit different, we might have seen other kind of work rewarded instead of social media and ads.
Well, that's the incompetent piece. Setting out to write giant, historic employment contracts without a plan is not something competent people do. And seemingly it's not that they overextended a bit, either, since reports claimed the time availability of the contracts was extremely limited; under 30 minutes in some cases.
Perhaps it was this: let's hit the market fast, scoop up all the talent we can before anybody can react, then stop.
I don't think there is anybody who would expect them to 'continue' offering $250 million packages. They would need to stop eventually. They just did it fast, all at once, and then stopped.
That's not really how that works in the corporate/big tech world. It's not as though Meta set out and said "Ok we're going to hire exactly 150 AI engineers and that will be our team and then we'll immediately freeze our recruiting efforts".
Cool, fun concepts/technology fucked by the world's most boring people, whose only desire is to dominate markets and attention. God forbid anything happen slowly/gradually without it being about them.
Fucked? Have you tried the latest Quest 3 experience? It would be nowhere near this if it were not for Meta and other big corps.
Second, did you see the amount of fun content on the store? It's insane. People who are commenting on the Quest have obviously never even opened the app store there.
How did he run out of money so fast? Think Zuck is one of those guys who get sucked into hype cycles and no one around him will tell him so. Even investors.
Personally, I think it's both! It's a bubble, but it's also going to be something that slowly but steadily transforms the world in the next 10-20 years.
People seem very confused thinking that something can't both be valuable AND a bubble.
Just look at the internet. The dot com bubble was one of the most widely recognised bubbles in history. But would you say the internet was a fad that went away? That there was no value there?
There's zero contradiction at all in it being both.
We might see another AI winter first, is my assumption. I believe that LLMs are fundamentally the wrong approach to AGI, and that bubble is going to burst before we have a better methodology for AGI.
Unfortunately, the major players seem focused on the pretension of 'getting to AGI through LLMs'.
Dot-com was the same way... the Internet did end up having the potential everyone thought it would, businesses just didn't handle the influx of investment well.
Yeah, it truly IS transformative for industries, no denying it at this point. What we have will remain even after a pop. But I think AI was special in how, for years, there were massive improvements the more compute you threw at it. Then we ran out of training material and suddenly things got much harder. It's this ramping up of investment to spearhead transformative tech, and then someone suddenly turning off the tap, that makes this so conflicted. I think.
... people said the same thing about the "metaverse" just a few years ago. "You know people are gonna live their entire lives in there! It's gonna change everything!" And 99% of people who heard that laughed and said "what are you smoking?" And I say the same thing when I hear people talk about "the AI machine god!"
I just did a phone screen with Meta, and the interviewer asked for Euclidean distance between two points; they definitely have some nerds in the building.
K closest points using Euclidean distance and a heap is not 8th grade math, although any 8th grade math problem can be transformed into a difficult "adult" question. Sums are elementary; asking to find a window of prefix sums that add up to something is still addition, but a little more tricky (see the sketch below).
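For the curious, here is a minimal Python sketch of that "window of prefix sums" idea - the classic hash-map approach to counting subarrays that sum to a target (the function name and example values are mine, for illustration):

    from collections import defaultdict
    from typing import List

    def count_subarrays_with_sum(nums: List[int], target: int) -> int:
        # Running prefix sum plus a map of previously seen prefix sums.
        # A subarray ending here sums to target iff some earlier prefix
        # equals (current prefix - target).
        seen = defaultdict(int)
        seen[0] = 1  # the empty prefix
        prefix = count = 0
        for x in nums:
            prefix += x
            count += seen[prefix - target]
            seen[prefix] += 1
        return count

    print(count_subarrays_with_sum([1, 1, 1], 2))  # prints 2

Still just addition, like the comment says, but the bookkeeping is where people slip up.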
People saying it is a high school maths problem! I'd like to see you provide a general method for accurately measuring the distance between two arbitrary points in space...
I suppose the trick is to have an ipad running GPT-voice-mode off to the side, next to your monitor. Instruct it to answer every question it overhears. This way you'll ace all of the "humiliation ritual" questions.
There's a YouTube channel made by a Meta engineer; he said to memorize the top 75 LeetCode Meta questions and approaches. He doesn't say fluff like "recognize patterns". My interviewer was a 3.88/4 GPA masters Comp Sci guy from Penn; I asked for feedback and he said always be studying, it's useful if you want a career...
It wasn't just Euclidean distance of course, it was this LeetCode problem, k closest points to origin: https://leetcode.com/problems/k-closest-points-to-origin/des... I thought if I needed a heap I would have to implement it myself; I didn't know I could use a library.
It's not a nearest neighbor problem; that is incorrect. They expect candidates to have the heap solution on the first go; you have 10-15 minutes to answer, no time to optimize. Cheaters get blacklisted. Welcome to the new reality.
Finding the k points closest to the origin (or any other point) is obviously the k-nearest neighbors problem. What algorithm and data structure you use does not change that.
edit: If you want to use a heap, the general solution is to define an appropriate cost function, e.g. the p-norm distance to a reference point, and push a pair of the distance (for the heap's comparisons) and the point itself. See the sketch below.
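For reference, a minimal Python sketch of the heap approach being discussed, assuming the LeetCode input format of [x, y] pairs (the function name is mine; squared distance is used as the key, since dropping the square root doesn't change the ordering):

    import heapq
    from typing import List

    def k_closest(points: List[List[int]], k: int) -> List[List[int]]:
        # Max-heap of size k, emulated by negating the squared distance:
        # the farthest of the k current candidates sits at heap[0].
        heap = []
        for x, y in points:
            d = -(x * x + y * y)
            if len(heap) < k:
                heapq.heappush(heap, (d, x, y))
            elif d > heap[0][0]:  # strictly closer than the farthest kept
                heapq.heapreplace(heap, (d, x, y))
        return [[x, y] for _, x, y in heap]

    print(k_closest([[1, 3], [-2, 2]], 1))  # prints [[-2, 2]]

This is O(n log k) rather than the O(n log n) of sorting everything, which is presumably why interviewers fish for it.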
True, I was thinking Node-and-neighbors, but this is a heap problem. It actually does matter which algorithm you use; I learned that the hard way today. Using a heap library (I didn't know you could do that) is much easier than trying to implement quickselect yourself. Don't make the same mistake!
The foundation of every LeetCode problem is a basic high school math problem. When the foundation of the problem is trigonometry, it's way harder than stacks, arrays, linked lists, BFS, DFS...
They're all bleeding money so yes it's inevitable.
It's always the same thing: Uber, food delivery, e-scooters, &c. They bait you with cheap trials and stay cheap until the investors' money runs out, and once you're reliant on them they jack up the prices as high as they can.
They are just following the Thiel playbook: race to a monopoly position as fast as possible, then extract profits afterwards (which inevitably leads to enshittification).
> Sam is the main one driving the hype, that's rich...
It's also funny that he's been accusing those who accept better job offers of being mercenaries. It does sound like the statements try to modulate competition both in the AI race and in acquiring the talent driving it.
GPT-5 was a massive disappointment to people expecting LLMs to accelerate to the singularity. Unless Google comes out with something amazing in the next Gemini, all the people betting on AI firms owning the singularity will be rethinking their bets.
But then, he's purposely comparing it to the .com bubble - that bubble had some underlying merit. He could have compared it to NFTs, the metaverse, the South Sea Company. It wouldn't make sense for him to say it's not a bubble when it's patently clear, so he picks his bubble.
Facebook, Twitter, and some others made it out of the social media bubble. Some "gig" apps survived the gig bubble. Some crypto apps survived peak crypto hype.
Not everyone has to lose, which is presumably what he's banking on.
To me AI is like the phone business. A few companies (Apple, Samsung) will manage to score a home run and the rest will be destined to offer commoditized products.
The article got me thinking that there's some sort of bottleneck that makes scaling astronomically expensive, or the value just isn't really there.
1. Buy up top talent from others working in this space
2. See what they produce over say, 6mo. to a year
3. Hire a corpus of regular ICs to see what _they_ produce
4. Open source the model to see if any programmer at all can produce something novel with a pretty robust model.
Observe that nothing amazing has really come out (besides a pattern-recognizing machine that placates the user to coerce them into using more tokens on more prompts), and potentially call it quits on hiring for a bubble.
> Observe that nothing amazing has really come out
I wouldn't say so. The problem is rather that some actually successful applications of such AI models are not what companies like Meta want to be associated with. Think into directions like AI boyfriend/girlfriend (a very active scene, and common usage of locally hosted LLMs), or roleplaying (in a very broad sense). For such applications, it matters a lot less if in some boundary cases the LLM produces strange results.
If you want to get an impression of such scenes, google "character.ai" (roleplaying), or for AI boyfriend/girlfriend have a look at https://old.reddit.com/r/MyBoyfriendIsAI/
I genuinely believe SamA has directed GPT-5 to be nerfed to speedrun the bubble. Watch, he'll let the smoking embers of the AI market cool and then reveal the next big advancement they are sitting on right now.
Nothing would give me a nicer feeling of schadenfreude than to see Meta, Google, and these other frothing-at-the-mouth AI hucksters take a bath on their bets.
Can we try to not turn HN into this? I come to this forum to find domain experts with interesting commentary, instead of emotionally charged low effort food fights.
This is similar to my view of AI: there is a huge bubble in current AI. Current AI is nothing more than a second-hand information processing model, with inherent cognitive biases, lag behind environmental changes, and other limitations and shortcomings.
I somewhat disagree here. Meta is a huge company with multiple products. Experimenting with AI and trying to capitalize on what's bound to be a larger user market is a valid company angle to take.
It might not pan out, but it's worth trying from a pure business point of view.
Meta's business model is to capture attention - largely with "content" - so they can charge lots of money to sprinkle ads amongst that content.
I can see a lot of utility for Meta to get deeply involved in the unlimited custom content generating machine. They have a lot of data about what sort of content gets people to spend more time with them. They now have the ability to endlessly create exactly what it is that keeps you most engaged and looking at ads.
Frankly, content businesses that get their revenue from ads are one of the most easily monetizable ways to use the outputs of AI.
Yes, it will pollute the internet to the point of making almost all information untrustable, but think of how much money can be extracted along the way!
The whole point is novelty/authenticity/scarcity, though; if you just have a machine that generates infinite, infinitely cute cat videos, then people will cease to be interested in cat videos. And it's not like they pay content creators anyway.
It's like Spain sinking its own economy by importing tons of silver.
We have services that will serve you effectively infinite cat videos, and neither cat videos nor the websites that serve them have ceased to be popular.
It is actually the basis for the sites that people tend to spend most of their time and attention on.
Facebook, Instagram, Reddit, TikTok all live on the users that only want to see infinite cat videos (substitute cat video for your favorite niche). Already much of the content is AI generated, and boy does it do numbers.
I am not convinced that novelty, authenticity, or scarcity matter in the business model. If they do, AI has solved novelty, has enough people fooled with authenticity, and scarcity... no one wants their cat video feed to stop.
Smart contracts sometimes fail because they are executed too literally. Fixing that needs something like judges, but automated - so AI! It will be perfect. /s
It's almost like nobody asked for the dramatic push of AI, and it was all created by billionaires trying to become even richer at the cost of people's health and the environment.
I still have yet to see it do anything useful. I've seen several very impressive "parlor tricks" which a decade ago I thought were impossible (image generation, text parsing, arguably passing the Turing test), but I still haven't seen anybody use AI in a way that solves a real problem which doesn't already have an existing solution.
I will say that Grok is a very useful research assistant for situations where you understand what you're looking at but you're at an impasse because you don't know what it's called and are therefore unable to look it up; but then it's just an incremental improvement over search engines rather than a revolutionary new technology.
LLMs are not the way to AGI and it's becoming clearer to even the most fanatic evangelists. It's not without reason GPT-5 was only a minor incremental update. I am convinced we have reached peak LLM.
There's no way a system of statistical predictions by itself can ever develop anything close to reasoning or intelligence. I think maybe there might be some potential there if we combined LLMs with formal reasoning systems - make the LLM nothing more than a fancy human language <-> formal logic translator, but even then, that translation layer will be inherently unreliable due to the nature of LLMs.
We're finally reaching the point where it's cost-prohibitive to sweep this fact under the rug with scaling out data centers and refreshing version numbers to clear contexts.
Good call in this case specifically, but lord this is some kind of directionless leadership despite well thought out concerns over the true economic impact of LLMs and other generative AI tech.
Useful, amazing tech but only for specific niches and not as generalist application that will end and transform the world as we know it.
I find it refreshing to browse r/betteroffline these days after 2 years of being bombarded with grifting LinkedIn lunatics everywhere you look.
> Mark Zuckerberg has blocked recruitment of artificial intelligence staff at Meta, slamming the brakes on a multibillion-dollar hiring spree amid fears of an AI bubble.
> amid fears of an AI bubble
Who told the telegraph that these two things are related? Is it just another case of wishful thinking?
I feel like the giant $100 million / $1 billion salaries could have been better spent just hiring a ton of math, computer science, and data science graduates and forming an AI skunkworks out of them.
Also throw in a ton of graduates from other fields - sciences, arts, psychology, biology, law, finance, or whatever else you can imagine - to help create data and red-team their fields.
Hiring people with creative writing and musical skills to give it more samples of creative writing, songwriting, summarization, etc.
And people that are good at teaching and breaking complex problems into easier to understand chunks for different age brackets.
Their userbase is big, but it's not the same as ChatGPT's; they won't get the same tasks to learn from users that ChatGPT does.
Is it just me or does it feel like billionaires of that ilk can never go broke no matter how bad their decisions are?
The complete shift to the metaverse, the complete shift to LLMs and fat AI glasses, the bullheaded “let’s suck all talents out of the atmosphere” phase and now let’s freeze all hiring. In a handful of years.
And yet, billionaires will remain billionaires. As if there are no consequences for these guys.
Meanwhile I feel another bubble burst coming that will leave everyone else high and dry.
The top 100 richest people on the globe can do a lot more stupid stuff and still walk away to a comfortable retirement, whereas the bottom 10-20 percent don't have this luxury.
Not to mention that these rich guys are playing with the money of even richer companies with waaay too much "free cash flow".
It could be that, beyond the AI bubble, there may be a broader understanding of economic conditions that Meta likely has. Corporate spending cuts often follow such insights.
If AI really is a bubble and somehow imploded spectacularly for the rest of this year, universities would continue to spit out AI specialists for years to come. Mr. Z. will keep hiring them into every opening that comes up whether he wants to or not.
Silicon Valley has never seen a true bubble burst; even the legendary dot-com bubble was a minor setback from which the industry fully recovered in about 5-10 years.
I have been saying for at least 15 years now that eventually Silly Valley will collapse when all these VCs stop funding dumb startups by the hundreds in search of the elusive "unicorns". But I've been wrong at every turn, as it seems that no matter how much money they waste on dumb bullshit, the so-called unicorns actually do generate enough revenue to make funding dumb startup ideas a profitable business model...
Note: I was too young to fully understand the dot com bubble, but I still remember a few things.
The difference I see is that, conversely to websites like pets.com, AI gave the masses something tangible and transformative with the promise it could get even better. Along with these promises, CEOs also hinted at a transformative impact "comparable to Electricity or the internet itself".
Given the pace of innovation in the last few years I guess a lot of people became firm believers and once you have zealots it takes time for them to change their mind. And these people surely influence the public into thinking that we are not, in fact, in a bubble.
Additionally, the companies that went bust in the early 2000s never had such lofty goals/promises to match their lofty market valuations, and because of that, current high market valuations/investments are somewhat flying under the radar.
> The difference I see is that, conversely to websites like pets.com, AI gave the masses something tangible and transformative with the promise it could get even better.
The promise is being offered, that's for sure. The product will never get there; LLMs by design will simply never be intelligent.
They seem to have been banking on the assumption that human intelligence truly is nothing more than predicting the next word based on what was just said/thought. That assumption sounds wrong on the face of it and they seem to be proving it wrong with LLMs.
However, even friends/colleagues that like me are in the AI field (I am more into the "ML" side of things) always mention that while it is true that predicting the next token is a poor approximation of intelligence, emergent behaviors can't be discounted. I don't know enough to have an opinion on that, but for sure it keeps people/companies buying GPUs.
> but for sure it keeps people/companies buying GPUs.
That's a tricky metric to use as an indicator though. Companies, and more importantly their investors, are pouring mountains of cash in the industry based on the hope of what AI may be in the future rather than what it is today. There are multiple incentives that could drive the market for GPUs, only a portion of those have to do with today's LLM outputs.
It was an example. Pets.com was just the flagship (at least in my mind), but during the dot com bubble there were many many more such sites that had an inflated market value. I mean, if it was just one site that crashed then it wouldn't be called a bubble.
From the Big Short:
Lawrence Fields: "Actually, no one can see a bubble. That's what makes it a bubble."
Michael Burry: "That's dumb, Lawrence. There are always markers."
Ah Michael Burry, the man who has predicted 18 of our last 2 bubbles. Classic broken clock being right, and in a way, perfectly validates the "no one can see a bubble" claim!
If Burry could actually see a bubble/crash, he wouldn't be wrong about them 95%+ of the time... (He actually missed the covid crash as well, which is pretty shocking considering his reputation and claims!)
Ultimately, hindsight is 20/20 and understanding whether or not "the markers" will lead to a major economic event or not is impossible, just like timing the market and picking stocks. At scale, it's impossible.
I feel 18 out of 2 isn't a good enough statistic to say he is "just right twice a day".
What was the cost of the 16 missed predictions? Presumably he is up over all!
Also doesn't even tell us his false positive rate. If, just for example, there were 1 million opportunities for him to call a bubble, and he called 18 and then there were only 2, this makes him look much better at predicting bubbles.
If you think that predicting an economic crash every single year since 2012 and being wrong (except for 2020, when he did not predict a crash and there was one) is good data, by all means, continue to trust the Boy Who Cried Crash.
This sets up the other quote from the movie:
Michael Burry: “I may be early but I’m not wrong”. Investor guy: “It’s the same thing! It's the same thing, Mike!”
While I think LLMs are not the pathway to AGI, this bubble narrative appears to be a concerted propaganda campaign intended to get people to sell, and it all started with Altman, the guy who was responsible for always pumping up the bubble. I don't know who else is behind this, but the Telegraph appears to be a major outlet of these stories. Just today alone:
Other media outlets are also making a massive push of this narrative. If they get their way, they may actually cause a massive selloff, letting everyone who profited from the giant bubble they created buy everything up cheap.
If there is a path to AGI, then ROI is going to be enormous literally regardless of how much was invested. Hopefully this is another bubble. I would really rather not have my life's work vaporized by the singularity.
I think I have said it before here (and in real life too) that AI is just another bubble, let alone AGI, which is a complete joke, and all I got was angry faces and responses. Tech has always had bubbles, and early adopters get the biggest slice, then try as much as possible to keep it alive to maximize that cut. By the time the average person is aware of it and talking about it, it's over already. Previous tech bubbles: the internet, search engines, content makers, smartphones, cybersecurity, blockchain and crypto, and now generative AI. By the way, AI was never new, and anyone in the field knows this. ML was already part of some tech before generative AI kicked in.
Glad I personally never jumped on the hype and still focused on what I think is the big thing, but until I get enough funds to be the first in the market, I will keep it low.
I don't think it's entirely a bubble. This is definitely revolutionary technology, on the scale of going to the moon. It will fundamentally change humanity.
But while the technology is revolutionary, the ideas and capability behind building these things aren't that complicated.
Paying a guy millions doesn't mean shit. So what Mark Zuckerberg was doing was dumb.
Of all the examples of things that actually had an impact, I would pick this one last... Steam engine, internet, personal computers, radios, GPS, &c. But going to the moon? The thing we did a few times and stopped doing once we won the USSR vs. USA dick contest?
Impact is irrelevant. We aren't sure about the impact of AI yet. But the technology is revolutionary. Thus for the example I picked something that's revolutionary but whose impact is not as clear.
The most likely explanation I can think of is drugs.
Offering $1B salaries and then backtracking; it's like when that addict friend calls you with a super cool idea at 11pm and then 5 days later regrets it.
Also, rejecting a $1B salary? Drugs. It isn't unheard of in Silicon Valley.
https://archive.ph/UqHo8
But always remember, it's not technically a monopoly!
Boy am I tired of that one. We desperately need more smaller companies and actual competition but nobody seems to even be trying
[1] https://www.article19.org/digital-markets-act-enforcement/ [2] https://newspeak.house
Also be aware you are a lawyer when you graduate law school, and you don't have to pass the bar unless that's a requirement for your practice. For example, a general counsel of an internet startup might not have to be a member of the bar, but someone going into trial court to represent clients does. I would think you could be a staffer for a congressperson with a JD and without bar membership pretty easily.
Once, out of curiosity, I looked into how easily someone without a formal law degree and work experience could take the bar exam "for fun", and IIRC in my state it wasn't really possible.
is there a state bar bootcamp?
Outside of going back to school for another degree, how would this shift be possible?
The allocation of capital is not even close to a monopoly. There are plenty of VC firms looking to fund almost any idea.
The point of VC, specifically, is to grow software monopolies - but it's very easy to pick up VC funding if you happen to live in the Bay Area.
The world would benefit greatly if the EU went ahead with a tech tax. It's crazy how much IT companies get away with things that would be the end of any other business.
This is why I ignore anything that CEOs say about AI in the news. Examples: AGI in a few years, most jobs will be obsolete, etc.
CEOs were never credible to begin with, but what about the Nobel laureate in Physics, Geoffrey Hinton, telling us to stop training radiologists? Nothing makes sense anymore.
Well, clearly he should stick to physics. Even if it's likely that AI will replace them soon, a lot of people would likely die unnecessarily if we ran too low on radiologists by ending the pipeline too soon. They're already overworked, and that can only go so far. It's not a bet we should make.
> Well, clearly he should stick to physics.
While his Nobel prize was for "physics", his domain is AI.
Geoffrey Hinton believes a little too much in his baby.
The media always says AI is the biggest technological change of our lifetime.. I think it was the internet actually.
I believe it's the biggest change since the Internet but what will be bigger will probably remain subjective.
I’d say that the social media revolution had a bigger effect than AI has at this point.
In the context of LLMs, we are in the Friendster era.
I guess we will know when we are in the Facebook era when my parents start using it.
That is a genuinely fair take. I agree
Social media was a quality of life upgrade; it wasn't promising too much, and it delivered on what it promised. (Maybe a little too much.)
AI on the other hand just like blockchain feels like hype.
Social Media is basically what enabled me to actually be social during the Friendster/Early Myspace era. It helped me get to know people I'd met in real life, and meet other people within the city I lived in.
Now if you're not on LinkedIn, people question whether you are a real person or not.
I hope AI ends up like blockchain. It's there if you have a use-case for it, but it's not absolutely embedded in everything you do. Both are insanely cool technologies.
The media was saying nfts are a reasonable investment and web3 is the future, so I am not sure if they have any remaining credibility.
We are at the awesome moment in history when the AI bubble is popping, so I am looking forward to a lot of journalists eating their words (not that anybody is keeping track, but they are wrong most of the time), a lot of LLM companies going under, and the domino crash of the stocks, from Meta and OpenAI to AWS, Google, and Microsoft, to SoftBank (the same guys who gave money to Adam Neumann of WeWork).
Can I ask why you would be looking forward to the stock market crashing? Vindication?
I personally have just 20% of my net worth in stocks, as they seem very expensive right now. A crash would allow me to increase my allocation at reasonable prices.
Who benefits from high stock prices?
Certainly not people regularly buying stocks or stock ETFs/Funds.
I suppose if you’re operating on the assumption that tech stocks are vastly overinflated then this makes sense. Otherwise I would expect the people that are regularly buying these securities would be happy that they’re increasing in value, no?
The Ponzi scheme of SPY is great until it stops. 10% of America's payroll gets lumped into it each month, and the generational wisdom is that you get a 10% ROI despite the economy growing 2%.
At some point that will collapse, and it won’t be pretty.
> The Ponzi scheme of SPY is great until it stops
TINA (there is no alternative).
Inflation will eat your cash.
Bonds hardly generate (real) returns unless you want to take big risks with duration.
Real estate is over inflated.
Gold is speculative.
Crypto is...not real.
What's left?
Valuation of companies tied to their real current profits! If a company is unprofitable now, it doesn't make sense, and is wholly wrong, that its stock trades at 1000x more than companies which actually turn a profit.
The difference between public ownership and public gambling is huge in its impact on society, especially when the market crashes.
So rather than someone auto-investing a slice of their paycheck into an S&P fund in their 401k, they should instead learn how to evaluate company financials so they can pick winners from a non-tax-advantaged account?
This is a losing strategy for the large majority, and it's been demonstrated repeatedly that even professional investors can't beat the market especially after considering fees.
https://www.investopedia.com/articles/investing/030916/buffe...
No. The 'goal' of investing (i.e., regularly buying) means attempting to own as many shares as possible. That is achieved by buying low and selling high. Buyers benefit from lower prices, not higher.
So many investors get this concept wrong. I suppose they get excited because what they bought went up in value and they have a sense of being enriched. But that is backwards. That is what they want 20-40 years from now, when it will almost certainly be the case that prices are not just higher, but much higher, than today. But when they are buying shares, the goal is to pay the lowest price possible. If I am 20 years old, I am screaming: crash and burn baby! Crash and burn! Gimme those shares at 50% off yesterday's price.
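A toy illustration of that point, with made-up prices: two buyers each invest $100 a month for three months, one through a flat market and one through a 50% crash that later recovers. Same final price, but the crash buyer ends up with more shares and more money.

    monthly = 100.0
    flat_prices = [100, 100, 100]   # no crash
    crash_prices = [100, 50, 100]   # 50% off in month two

    shares_flat = sum(monthly / p for p in flat_prices)    # 3.0 shares
    shares_crash = sum(monthly / p for p in crash_prices)  # 4.0 shares

    final_price = 100
    print(shares_flat * final_price)   # 300.0
    print(shares_crash * final_price)  # 400.0

Of course this assumes your income survives the crash, which is the usual objection.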
> I am screaming: crash and burn baby! Crash and burn! Gimme those shares at 50% off yesterday's price.
Sure, but once you reach the point where you have a lot of money in the market you probably won't enjoy watching 50% of it disappear, even if it means your next auto investment is for a nice bargain price.
Also, when the stock market crashes usually bad things accompany it. Like a depressed economy and job losses.
> Also, when the stock market crashes usually bad things accompany it. Like a depressed economy and job losses.
It's our own fault for tying the stock market's performance to our economy's performance. Why should I, a train worker, have my pension affected by Sam Altman's bad decision making or by Enron's lies and deception?
It's our own fault that the stock market is so volatile and that we tie so much of our economy to a financial gambling machine that's become increasingly divorced from reality in the last couple of decades. Like, you are putting money on a stock that trades at 1000x on a company that is 10 years away from being profitable? You deserve your money to go poof.
> Like, you are putting money on a stock that trades at 1000x on a company that is 10 years away from being profitable? You deserve your money to go poof.
Who is suggesting that?
NVDA trades at 57x earnings, MSFT at 37x, GOOG at 22x. The article is about META, and they are at 27x. These are the big companies that dominate the S&P that we're talking about.
I don't think anyone is suggesting to put their life savings into Anthropic. They can't anyway, it's not public.
The S&P P/E is 30, which is high, but still lower than it was in 2020, before the AI "bubble" started.
With stock prices divorced from reality, the ones who benefit are the ones having the funds to buy in volume, the gamblers, and the ones hyping the stocks and creating the illusion of profitability and growth. Years ago it would have been unthinkable to have so many unprofitable companies with an unclear path to profitability carrying such high valuations, but we have normalized frenzied gambling as a society.
The current absolute balloon of a market is about to pop, and sadly, the people who hyped the stocks are also the ones who know when to jump ship, while the hapless schmucks who believed the hype will most likely lose their money, along with a lot of folks whose retirement investment funds either didn't do their diligence or were outright greedy.
In a way, as a society we deserve this upcoming crash, because we allow charlatans and con people like Musk, Zuck and Sam to sell us snake oil.
That's one interpretation, but nobody really knows. It's also possible that they got a bunch of big egos in a room and decided they didn't need any more until they figured out how to organize things.
You think folks that have experience managing this much money/resources (unlike yourself) are clueless? More likely it's 4D chess.
Just like the metaverse?
sometimes you lose
When your track record includes a lot of losses, the scale points more and more toward "clueless" and away from "4D chess".
Especially when there are hours of public footage of the decision maker in question not sounding particularly astute as a technical or strategic thinker.
How does zuck’s net win-loss track record look to you?
Isn’t that already priced in? Shorts exist.
The market can stay irrational for longer than you can stay solvent.
That was my point, if someone thinks that Meta is overvalued, they can put their money where their mouth is. The fact that the share price hasn’t cratered is a kind of collective belief in the opposite.
Edgy prediction: Meta is irrelevant and on a path to even further irrelevancy, and, fingers crossed, a bankruptcy or at least Zuck being removed as the main man
"The Zuck" can't be removed — he has more the 60% of the votes![0] The board stupidly voted to do so a few years ago.
[0]: He only has about 13% of the shares, but the dual allocation means that his class B shares are worth 10 votes. And he owns 99% of those shares. https://observer.com/2023/06/mark-zuckerberg-2023-shareholde...
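A back-of-the-envelope sketch of how a ~13% stake turns into majority control, using hypothetical round share counts (the real figures live in Meta's proxy statements):

    # Hypothetical numbers for illustration only.
    class_a = 2_800_000_000      # class A shares, 1 vote each
    class_b = 440_000_000        # class B shares, 10 votes each
    zuck_b = 0.99 * class_b      # he holds ~99% of class B

    economic_stake = zuck_b / (class_a + class_b)
    voting_power = (zuck_b * 10) / (class_a + class_b * 10)

    print(f"{economic_stake:.1%} of shares")  # ~13.4% of shares
    print(f"{voting_power:.1%} of votes")     # ~60.5% of votes

The 10x multiplier on a block he almost wholly owns is doing all the work.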
Edgier prediction: Meta is too (big, but more importantly) relevant to (be allowed to) fail, because it can be co-opted by TLAs due to its apps being pre-loaded on mobile devices.
https://news.ycombinator.com/context?id=44979751
There were a lot of things "too big to fail".
Lehman Brothers, Enron.
When this bubble pops, it's going to be absolute chaos, maybe just like last time.
It will be chaos for those working at Meta and those invested in them too much without an appropriate hedge. I doubt I will notice much, to be honest.
Even if Meta tanked, unless Messenger/Whatsapp stop working, it’s kind of beside the point how much their stock trades for. Everyone will just use whatever has or keeps the most public interest, whether that is Meta-owned or something else.
The worrying aspect is that for Meta to really tank in value, the shit has to have already hit the fan, and it probably would not be isolated to Meta.
My point in my prior comment was that Meta serves the purposes of the IC status quo just by doing what they’re already doing. Cloudflare too, in a way.
I meant that if Meta actually goes under, since they are clearly a decent part of the S&P 500, it might create a spiraling effect similar to Lehman Brothers, which could affect the world economy and thus you and me both.
They better get their shit sorted.
> Meta serves the purposes of the IC status quo
The problem is that their products are getting worse and worse. Signal is already taking a huge share from WhatsApp (ads and AI chat bots, really?) and Messenger.
TikTok absolutely obliterated Instagram. Facebook is sliding into irrelevancy, and most importantly, they have a lot of failed products like Oculus, the Metaverse (wtf is it anyway), Llama, etc. Now they are sliding into even more irrelevance and burning money even faster trying to poach extremely expensive OpenAI folks. My conspiracy theory is that Facebook's ad earnings numbers are somehow a scam.
After so many bad decisions on their part, so much waste and bad execution, I can't see them surviving the next 5 years.
They can just buy other companies for 5 years and coast, or they would have done if not for antitrust concerns under Biden. They can afford to pay hundreds of millions of dollars to rockstars for well over 5 years, and acquihire their way to acquiring the next big thing. I think they’re probably appropriately valued alongside other trillion dollar companies, but they will likely find that it’s less lonely at the top than anticipated.
Signal serves IC interests too by requiring phone numbers.
> They can just buy other companies for 5 years and coast
No, what they could do in the past is not at all how they can operate today. They can't afford to pay the rockstars anymore; they went through multiple rounds of layoffs. They can't afford to drop the stock too low either. Basically they are in a corner, and I love it. Fingers crossed that within the next five years they shake up upper management and Zuck is out.
They’re laying off folks, but that doesn’t mean they’re doing it due to payroll pressure. They’re a megacorp. They can’t go broke from payroll, they have accountants for that. The folks getting acquihired and poached aren’t at risk of being laid off as long as they’re producing value. If the value they bring at Meta is also not being provided elsewhere, so much the better for Meta. It hurts Meta’s competitors more than it hurts Meta, because Meta won’t miss the money. They don’t need high headcount, they need folks who are irreplaceable. It’s a different hiring process for different jobs.
I doubt Zuck is out anytime soon, unless folks stop using their products compared to alternatives. I think it’s possible, but I think the odds are at best even for him to go in 5 years. In 10 years, who can say? Facebook users are pretty locked in because there’s nothing else like it for the users that regularly use it. Facebook users who aren’t on alternatives aren’t just going to switch to Reddit or TikTok overnight. Why would they? I can’t follow your reasoning, but I understand not being a fan of Zuck or Meta, I guess, but I think their business seems pretty strong right now, though that is subject to change along with consumer whims.
I can't/won't short the stock because shorting usually is for 14 days or so, and I can't be certain about Meta or any company in that timeframe.
I mean, theoretically you could short a company for a really long time, it seems; I just searched. I always assumed it was usually 14 days, but still.
Isn’t that the reason options even exist? You need to know you won’t starve and die if your harvest doesn’t come in. I’ll admit that I’m no expert on finance and innovations in financial instruments, but I think short selling has been around in some form or another for centuries.
https://en.wikipedia.org/wiki/Short_(finance)
> The practice of short selling was likely invented in 1609 by Dutch businessman Isaac Le Maire, a sizeable shareholder of the Dutch East India Company (Vereenigde Oostindische Compagnie or VOC in Dutch).
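To make the "stay solvent" problem above concrete (all numbers made up for illustration): a short position has no built-in expiry, but you pay an ongoing fee to borrow the shares, so being early has a real carrying cost.

    entry_price = 700.0   # price at which you short
    borrow_rate = 0.03    # 3%/year stock-borrow fee; varies widely

    def short_pnl(exit_price: float, days_held: int) -> float:
        gross = entry_price - exit_price          # profit if the price fell
        carry = entry_price * borrow_rate * days_held / 365
        return gross - carry

    print(short_pnl(560.0, 30))   # right and fast:   ~ +138 per share
    print(short_pnl(560.0, 730))  # right, 2yr early: ~  +98 per share
    print(short_pnl(840.0, 365))  # wrong for a year: ~ -161 per share

This also ignores margin calls along the way, which are what actually force shorts out.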
Doesn't sound like "sometimes"; it sounds like most cases.
Like everything but acquisitions since the original product?
And like VR?
And like... Llama?
Maybe also like adding ads in WhatsApp because we gotta squeeze our users so we can spend on... AI gurus?
Meta has not had a win since they named themselves Meta. It's enjoyable to watch them flail around like clueless morons jumping on every fad and wasting their time and money.
It makes me feel better / more comfortable with myself, seeing what Meta does. It's almost childish, I know.
Maybe this sounds selfish, but it's a little fun to me to see them lose. I just don't like Meta and its privacy-insensitive ad network bullshit.
Like the fact that if someone took a photo and deleted it, they'd show them beauty ads because they're insecure. I can't give 2 cents about the growth of such a Black Mirror-esque company.
> I can't give 2 cents about the growth of such a Black Mirror-esque company
I would donate my two cents or even more to witness their downfall, though. I left WhatsApp years ago and haven't used any of their other services like FB or Instagram. I don't want to contribute to a company that actively helped a couple of genocides (Myanmar), helped elect a dictator or two (Philippines), spread racist propaganda, and most recently allowed women to be called 'personal objects'.
Their tech is far from impressive, their products are far from impressive, the only impressive thing is that they are still in business.
Anti-aging startups are 5D chess; the 4th dimension is the most fickle, so it's very hard to make it to a 4D intercept when your ideas are stupid.
I think the use of "4d chess" was your downfall.
I do however think that this is a business choice that at the very least was likely extensively discussed.
Yes, yes I do. How much practical experience does someone with billions of dollars have with the average person, the average employee, the average job, and the kind of skills and desires that normal people possess? How much does today's society and culture and technology resemble society even just 15 years ago? Being a billionaire allows them to put themselves into their own social and cultural bubble surrounded by sycophants.
META has only made $78.7 billion in operating income over the past 12 months. Time to buckle up!
https://finance.yahoo.com/quote/META/financials/
It's really difficult to wrap one's head around the cash they're able to deploy.
They “deploy” much more than what they generate.
Their cash position has gone from $44bn to $12bn in the first six months of the year, and they are now getting other people to pay for datacenters: https://www.reuters.com/business/meta-taps-pimco-blue-owl-29...
I take offense to your implication this is an incorrect usage of the word deploy.
As opposed to employ? :p
I really hope they told the Louisiana regulators this in the meeting yesterday because the argument was something along the lines of “Meta is worth $2T”
Ouch. Other FAANG in a similar position?
I guess there's no 'M' in "FAANG" but there's this:
https://www.geekwire.com/2025/im-good-for-my-80-billion-what...
FAANG has been replaced by Mag7: Alphabet, Amazon, Apple, Broadcom, Meta, Microsoft, and Nvidia.
Can we switch to BANAMMA, Ba-nam-ma
* Broadcom * Alphabet * Nvidia * Amazon * Meta * Microsoft * Apple
Why not Bloomberg instead lol
Does Broadcom do anything but get hate for their shitty decisions? They are becoming, if they aren't already, the new Oracle.
Lol get out of the echo chamber
Edit: to make this helpful, look at Broadcom's interconnect, switching technology, and co-packaged optics.
At the current price of $107,586 per kilo of gold, that is 731,507 kilos of gold per year. A rail box car has a load limit of 92,500 kilos. Eight full box cars, or 16 half-full box cars, of gold currently represent the annual output of META.
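Sanity-checking those figures (using only the numbers quoted in the comment above):

    operating_income = 78.7e9     # dollars, trailing twelve months
    gold_per_kg = 107_586         # dollars per kilo, as quoted
    boxcar_limit = 92_500         # kilos, quoted load limit

    kilos = operating_income / gold_per_kg
    print(round(kilos))           # ~731,508 kg
    print(kilos / boxcar_limit)   # ~7.9 full box cars

Call it eight box cars; the comment's arithmetic holds up.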
23:1 P/E. Not Tesla levels of stupidity but still high for a mature company.
An astonishing number
385 comments based on a clickbait headline from the Telegraph (you know, that sophisticated tech-focused newspaper...)
how does this compare to the depreciation cost of their datacenters?
The financials from the link do not specifically call out depreciation expense. But operating income should take depreciation expense into account.
The financials have a line below the Net Income line called "Reconciled depreciation", at about $16.7 billion. I do not know exactly what that means (maybe this is how they get to the EBITDA metric), but it may be the number you are looking for.
Most of the operating expenses seem to be in the $13 billion "R&D" spend on the Q2 2025 statement.
https://pbs.twimg.com/media/GxIeCe7bkAEwXju?format=jpg&name=...
Zuckerberg either doesn't have the resolve for changing the business, or just keeps picking the wrong directions (depending on your biases).
First Facebook tried to pivot into mobile, pushed really hard for a short time, and then flopped. Then Facebook tried really hard to make the Metaverse a thing for a while, but eventually Meta stopped finding it interesting and significantly reduced investment. Then AI was the big thing, and Meta put a huge amount of money into it, chasing after other companies with an arguably novel approach compared to the rest of big tech... but now it seems to be backing out, or at least messaging less commitment. Oh, and I think there was some crypto in there too at one point?
I'm not saying that they should have stuck with any of these. The business may not have worked in each case, and that's fine, but spending billions on each one seems like a bad idea. Zuckerberg is great at chasing the next big thing, but seemingly bad at landing the next big thing. He either needs to chase them more tentatively, investing far less, or he needs to stick with them long enough to work out all the issues and build the growth over the long term.
For the past 15 years, mobile has been the main revenue source for Facebook. As big as Facebook is, they're at the mercy of two competitors: Apple and Google. Apple has been very hostile to Facebook, because Facebook makes a shitload of money off Apple's platform and refused to pay a certain percentage to Apple - unlike Google, which pays $20B a year to access iOS users. Apple tried to cut Facebook off with ATT in iOS 14, but it didn't work.
Because of this, Zuckerberg has to be incredibly paranoid about controlling his company's destiny, to stop relying on others' platforms to deliver ads. It would be catastrophic for Facebook not to be a main player on the next computing platform, and they're currently making a lot of money from their other businesses. Zuckerberg is ruthless and paranoid; he has total control of Facebook and he will use all its resources to control the next big thing. I think it comes down to this: Zuckerberg believes it's cheaper to be wrong than to miss out on the next platform, and Facebook can afford to be wrong (to a certain extent).
> For the past 15 years, mobile has been the main revenue source for Facebook. As big as Facebook is, they're at the mercy of the 2 competitors
Before mobile was this big, Facebook tried their own platform and bottled it. This was during the period when the market was still diverse, with Windows phones, Blackberries, etc.
They also tried to make mobile web a thing for a few years past when it was obvious that native apps were the way forward.
Facebook certainly did not have the resources and experience to make a mobile OS at that point. Microsoft tried and failed; there was no space for a 3rd mobile OS.
> They also tried to make mobile web a thing for a few years past when it was obvious that native apps were the way forward.
This was one of the first points of friction Facebook encountered with Apple. They wanted to make their own store inside the Facebook app on iOS, but obviously Apple said no. Maybe doing the Facebook app in HTML5 was a way to protest the way Apple was moving things forward, but again it didn't work; their app was crap and they rewrote everything in native code.
Microsoft tried and then pulled a Google, i.e. they randomly gave up on something people were actively using.
Meta Quest Store charges the same cut as Apple with stricter control in many ways.
Don't forget gaming back in the day! Facebook games started taking off, then Facebook decided that the _only_ way you could get paid on the Facebook platform was with Facebook Credits, and to incentivize Facebook as the gaming platform of choice, Facebook would give out free Credits to players to spend on Facebook games. Of course, if your game was the one they chose to spend those Credits on, you wouldn't actually get paid, not with promotional credits, what, are you crazy?
No, I'm not still bitter from that era, why do you ask?
Cory Doctorow has a compelling theory that the megatech companies have to appear to be startups, or else their share price reverts to normal multiples. Hence the continuous string of increasingly over-hyped "game-changing technologies" they all (not just Meta) keep rolling out.
VR, blockchain and LLMs have their value, but it's a tiny fraction of the insane amounts of money being pumped into these bubbles. There will be tears before bedtime.
Indeed, for big valley tech companies it's crucial to have a new business developing in the wings which has plausible potential to be the "next big thing." They're desperate to keep their stock price from being evaluated solely on trailing twelve month revenue, so having a shiny, ephemeral hype-magnet to attract inflated growth expectations is essential.
So far, it appears the psychology of investors allows the new thing to fail to deliver big revenue and be tacitly dropped - as long as there's a new new thing to replace it as the aspirational vehicle. Like any good mark in a con game, tech investors want to believe.
> as long as there's a new new thing to replace it as the aspirational vehicle. Like any good mark in a con game, tech investors want to believe.
Yea, but it seems like the new new thing needs to get progressively bigger with each cycle, which is why I think the shell game is almost over.
They really can't overpromise much more than they did with the AI hype-cycle.
It feels like a startup valuation in that having a down round is...not favored by investors; I feel like having a step-down in promises would also be problematic.
> They really can't overpromise much more than they did with the AI hype-cycle.
While I agree that "replace all human labor" is already pretty high up there on the overreaching u/dis-topian promise list, there are still a few things left.
Perhaps the next dream to sell will be digitizing the minds of Valued Shareholders so that they can grasp immortality inside the computer.
Yup, which is why I think the bubble is going to burst. What's surprising to me is that a lot of normal folks might be hurt by this too, in the sense that the S&P 500 has really concentrated its holdings in companies betting on AI, and the hype train seems to be coming to an end, with the bubble nearing its burst.
Or every investor just expects the other investors will fall for this, but the result is the same: number go up so buy more. It could be no one really falls for it at all.
The economy could be doing really badly while the stock market does well; they aren't directly correlated anymore, imo.
It can take a long time for the stock market to actually correct, but I know one thing: it will be corrected some day, and maybe they will call it the bursting of a bubble.
> Cory Doctorow has a compelling theory that the megatech companies have to appear to be startups, or else their share price reverts to normal multiples.
Meta's P/E is about the same as the S&P 500's.
This may well be true, but my point is more that Facebook/Meta/Zuckerberg seem almost uniquely unable to turn the startups into great new businesses, when compared with the other big tech companies.
Amazon added cloud and prime, Microsoft added cloud, xbox, 365, Google added Chrome, Android, cloud, Youtube, consumer subscriptions, workspace, etc. Netflix added streaming and their own content, Apple added mobile, wearables, subscriptions.
Meta though, they've got an abandoned phone platform from years ago, a half-baked Metaverse that is being defunded, a small hardware business for the Quest, a pro VR headset that got defunded, a crypto business that got deprioritised, and an LLM that's expensive relative to open competitors and underperforms relative to closed competitors... which the tide appears to be turning on as the AI bubble reaches popping point.
> Facebook/Meta/Zuckerberg seem almost uniquely unable to turn the startups into great new businesses, when compared with the other big tech companies.
Really? Instagram, WhatsApp... the two most used apps & services in the world?
> Google added Chrome, Android, cloud, Youtube,
It's arguable whether GCP is profitable, but Chrome/Android/YouTube are money-losing businesses if you exclude ad revenues.
They're money-losing businesses if you exclude their revenue source? That's a weird take; you could say the same thing about Facebook itself...
Maybe he can work on making Facebook not be such a piece of shit. I feel like he got his one lucky break and should just give up on trying to make more money. He already has billions. Is he proud of Facebook as a product? Because as a user it feels sluggish, buggy, inconsistent, and just full of low quality trash. I would be embarrassed if I was him.
That really seems to be the defining characteristic of the 21st century elite: they’re shameless and proud of it.
Only 21st century? Have you read any history at all?
The Metaverse was a flop, maybe, but Meta makes something like $1 billion a week from its mobile apps; it'd be crazy to say that is not successful.
The fact that it was so successful, and that Zuck picked mobile as the next big thing before many of his peers and against what managers in the company wanted to do, is probably what has made him overconfident that he can do it again.
By "pivot into mobile" I suspect the other poster is referring to Facebook Home, an ill-fated Android skin and line of smartphones.
https://en.wikipedia.org/wiki/Facebook_Home
No. Back when smartphones were still in the process of taking over the market, Zuck saw the adoption curve and realized that future ad revenue would be from phone scrollers.
At the time most features were designed and implemented first for desktop and later ported to mobile. He issued an edict to all hands: design and build for mobile first. Not at some point in the future but for everything, starting immediately.
Maybe this doesn't sound major, but for the company it was a turn on a dime, and the pivot was both well informed and highly successful in practice.
Call me classic and old school, but I call a company successful if it actually makes more money than it spends. Everything else is just driving debt and the economy, but no actual success.
>Then Facebook tried really hard to make the Metaverse a thing, and for a while, but eventually Meta stopped finding it interesting and significantly reduced investment.
That's a charitable description of a massive bonfire of cash and credibility for an end product that looks worse than a 1990s MMORPG and has fewer active users than a small town sports arena.
Compared to other recent bubbles (crypto, NFTs, and AI), it's practically quaint and lovable. About the only people it hurt were Mark Zuckerberg and the marketing grifters who tried to start companies around it.
> Then Facebook tried really hard to make the Metaverse a thing...
An unforced error on the scale of HBO switching to MAX, except likely far more expensive. What is the Metaverse anyway?
Man, HBO's name changes are just so funny. I just searched; four name changes is wild.
It's the future!
The same as Zuck's bet on VR (remember Oculus?).
Similar to Zuck's promises of superintelligence.
Just one of the many futures wherein Meta poured a lot of money and achieved nothing.
I hope in their real future there is bankruptcy and ruin.
Short sighted. I wouldn't count VR out until the fidelity is good enough and cheap enough that everyone who has a smartphone can experience VR/AR.
If it still doesn't take off, fair.
But I bet the form factor will be glasses because now you can have a screen way bigger than a phone or monitor and the interface is way smarter (ai).
It's just a matter of when everyone can afford one
> Short sighted
If we wait 20 years for VR to take off and it's not Meta who benefits then, it's them who are short sighted to have started so early on that bandwagon.
Besides, waiting for something to materialize before being able to declare that it is stupid is a cop out. What, are we waiting for NFTs to become useful? They are stupid now. VR is stupid and unsuccessful now. I ain't waiting to be able to declare that Meta screwed up both in VR and in the Metaverse whatever the Metaverse is.
It's important to analyze decisions within the context at the time, not the modern context.
When Facebook went into gaming, it was about the time they went public and they were in search of revenue. At the time, FB games were huge. It was the era of Farmville. Some thought that FB and Zynga would be the new Intel and Microsoft. This was also long before mobile gaming was really big, so gaming wasn't an unreasonable bet.
What really killed FB Gaming was not having a mobile platform. They tried, but they failed. We could live in a very different world if FB had partnered with Google (who had Android), but both saw each other as an existential threat.
After this, Zuckerberg paid $1 billion for Instagram. This was a 100x decision, much like Google buying Youtube.
But in the last 5-10 years the company has seemed directionless. FB itself has fallen out of favor. Tiktok came out of nowhere and has really eaten FB's lunch.
The Metaverse was the biggest L. Tens of billions of dollars got thrown at this before any product-market fit was found. VR has always been a solution looking for a problem. Companies have focused on how it can benefit them, but consumers just don't want headsets strapped to their heads. It's never grown beyond a niche and never shown signs that it would.
This was so disastrous that the company lost like 60%+ of its value and seemingly it's been abandoned now.
Meta also dabbled with cryptocurrencies and NFTs. Also abandoned.
Social media really seems to have settled into a means of following public figures. Individuals generally seem to interact with each other via group texts.
Meta has a massive corpus of posts, comments, interactions, etc to train AI. But what does Meta do with AI? Can they build a moat? It's never been clear to me what the end goal is.
> Meta has a massive corpus of posts, comments, interactions, etc to train AI
I question whether the corpus is of particularly high quality and therefore valuable source data to train on.
On the one hand: 20+ years of posts. In hundreds of languages (very useful to counteract the extreme English-centricity of most AI today).
On the other hand: 15+ years of those posts are clustered on a tiny number of topics, like politics and selling marketplace items. Not very useful unless you are building RagebaitAI I suppose. Reddit's data would seem to be far more valuable on that basis.
> Social media really seems to have settled into a means of following public figures. Individuals generally seem to interact with each other via group texts.
I wish Google circles were still a thing.
You're right about Instagram, I did forget them. That was a huge get. Was that skill or luck though?
Didn't they just buy it though? Then they proceeded to fuck it up just like they did to Facebook.
Depends on your interpretation. Maybe? I think there's a fair case that Instagram wouldn't be what it is today if it wasn't bought by Facebook.
You could also level a similar question at Google about YouTube. I believe YouTube is one of Google's great successes (bias: I work at Google), and that it wouldn't have become what it is now outside of Google, but I think it would be hypocritical of me to not accept the same about Instagram.
He, as many other billionaires, confused luck for skill. Just because they were at the right time in the right place to launch something, doesn't mean their other ideas are solid or make sense.
Wouldn't it have been skill? Just exploiting the initial idea would have been a closed Myspace clone with not much of a path to success.
He never tried his secret sauce again. He never realized where his actual success was
Oh, I'm sure one day he'll chase the next big thing, but like the proverbial dog who chases the car, what will he do once he catches it?
It almost seems as though the record-setting bonuses they were doling out to hire the top minds in AI might have been a little shortsighted.
A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but that have other profitable businesses (like Azure). I was waiting for her to put her foot in her mouth and buy into the hype. She skillfully navigated the question in a way that won my respect.
I personally believe that a lot of investment money is going to evaporate before the market resets. What we're calling AI will continue to have certain uses, but investors will realize that the moonshot being promised is undeliverable and a lot of jobs will disappear. This will hurt the wider industry, and the economy by extension.
I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."
We're clearly seeing what AI will eventually be able to do, just like many VOD, smartphone and grocery delivery companies of the 90s did with the internet. The groundwork has been laid, and it's not too hard to see the shape of things to come.
This tech, however, is still far too immature for a lot of use cases. There's enough of it available that things feel like they ought to work, but we aren't quite there yet. It's not quite useless; there's a lot you can do with AI already, but many use cases that are obvious, and not only in retrospect, will only be possible once it matures.
Some people even figured it out in the 80's. Sears founded and ran Prodigy, a large BBS and eventually ISP. They were trying to set themselves up to become Amazon. Not only that, Prodigy's thing (for a while) was using advertising revenue to lower subscription prices.
Your "Netflix over dialup" analogy is more accessible to this readership, but Sears+Prodigy is my favorite example of trying to make the future happen too early. There are countless others.
Today I learned that Sears founded Prodigy!
Amazing how far that company has fallen; they were sort of a force to be reckoned with in the 70's and 80's with Craftsman and Allstate and Discover and Kenmore and a bunch of other things, and now they're basically dead as far as I can tell.
On the topic of how Sears used to be high-tech: back in 1981, when IBM introduced the IBM PC, it was the first time that they needed to sell computers through retail. So they partnered with Sears, along with the Computerland chain of computer stores, since Sears was considered a reasonable place for a businessperson to buy a computer. To plan this, meetings were held at the Sears Tower, which was the world's tallest building at the time.
Bought my IBM PC from Sears back in the day. Still have the receipt.
Worthy of its own Hacker News post. Would love to see it.
Yup I agree GP.
Today is the first time I've heard of Sears, and the comment about the Sears Tower and IBM literally gave me goosebumps.
Wow, I hadn't thought about Computerland for quite a while. That was my go-to to kill some time at the mall when I was a teen.
My favorite anecdote about Sears is from Starbucks' current HQ: the HQ used to be a warehouse for Sears. Before renovation, the first-floor walls next to the elevators had Sears' "commitment to customers" (or something like that).
To me it read like it was written by Amazon decades earlier. Something about how Sears promises that customers will be 100% satisfied with the purchase, and if for whatever reason that is not the case customers can return the purchase back to Sears and Sears will pay for the return transportation charges.
Craftsman tools have almost felt like a life-hack sometimes; their no-questions-asked warranties were just incredible.
My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story.
I haven't tested these warranties since Craftsman was sold to Black and Decker, but when it was still owned by Sears I almost exclusively bought Craftsman tools as a result of their wonderful warranties.
FWIW, I bought a Craftsman 1/4" drive ratchet/socket set at a Lowes Home Improvement store last year, and when I got it home and started messing with it, the ratchet jammed up immediately (before even being used on any actual fastener). I drove back over there the next day and the lady at the service desk took a quick look, said "go get another one off the shelf and come back here." I did, and by the time I got back she'd finished whatever paperwork needed to be done, handed me some $whatever and said "have a nice day."
Maybe not quite as hassle free as in years past, but I found the experience acceptable enough.
I think that's as much about Lowes as it is Craftsman... I don't think Craftsman tools have been particularly well built, just that they had, and still have, enough margin to support a no-questions-asked policy... it probably helps that a lot of the materials are completely and easily recyclable.
It made sense to use the Craftsman screwdriver as a pry bar in a pinch and save the really good one for just turning screws.
> My dad broke a Craftsman shovel once that he had owned for four years, took it to Sears, and it was replaced immediately, no questions asked. I broke a socket wrench that I had owned for a year and had the same story.
This is covered by consumer protection laws in some places. 4 years on a spade would be pushing it, but I’d try with a good one. Here in New Zealand it’s called ‘The Consumer Guarantees Act’. We pay more at purchase time, but we do get something for it.
Lots of tools have lifetime warranties. Harbor Freight's swap process is probably fastest, these days, for folks with one nearby. Tekton's process is also painless, but slower: Send them a photo of the broken tool, and they deliver a new tool to your door.
But I'm not old enough to remember a time when lifetime warranties were unusual. In my lifetime, a warranty on hand tools has always seemed more common than not, outside of the bottom-most cheese-grade stuff.
I mean: The Lowes house-brand diagonal cutters I bought for my first real job had a lifetime warranty.
And before my time of being aware of the world, JC Penney sold tools with lifetime warranties.
(I remember being at the mall with my dad when he took a JC Penney-branded screwdriver back to JC Penney -- probably 35 years ago.
He got some pushback from people who insisted that they had never sold tools, and then from people who insisted that they never had warranties, and then he finally found the fellow old person who had worked there long enough to know what to do. Without any hesitation at all, she told us to walk over to Sears, buy a similar Craftsman screwdriver, and come back with a receipt.
So that's what we did.
She took the receipt and gave him his money back.
Good 'nuff.)
Harbor freight is like that too.
harbor freight will take literally anything back, and put it right back on the shelf.
How? It lacks packaging and the tool could be marred up.
I'll add: "if you return it in the box".
I bought a hydraulic press. It was missing bolts and had already been assembled before.
A friend bought some wheel dollies; the threads on the castors were stripped out.
People buy things, use them once for their project, then return them.
The Sears Catalog was the Amazon of its day.
:-) Then it's going to blow your mind that CompuServe (while not founded by them) was a product of H&R Block.
There were quite a few small ISP's in the 1990's. Even Bill Gothard[0] had one.
[0]https://web.archive.org/web/19990208003742/http://characterl...
Prodigy predates ISPs (internet service providers). Before the web had matured a little in 1993, the internet was too technically challenging to interest most consumers, except maybe for email. Prodigy was formed in 1984, and although it offered email, it was walled-garden email: a Prodigy user could not exchange email with the internet until the mid-1990s, at which point Prodigy might have become an ISP for a few years before going out of business.
At a previous job I worked under a guy who started his own ISP in the early 90’s. I would have loved to have been part of that scene but I was only like four when that happened.
Blame short sighted investors asking Sears to "focus"
They weren't wrong: its core business, in what is still a viable-enough sector, collapsed. And if it had been truly well managed, running an ISP and a retailer should have given it enough insight to become Amazon.
I worked at Sears at the time when Amazon first started becoming a household name. For the life of me, I couldn't understand why they didn't make a copycat site called the Sears Catalog Online. But then I think about it: management wanted salesmanship, because selling maintenance agreements was their cash cow. Low-margin sales won in the long term, hence we have Walmart and Amazon as the biggest retailers.
Likely standard management failure. Sears got burned badly when it put its catalog online on Prodigy in the 80's, so obviously online sales were doomed to failure.
Timing is a difficult variable.
It wasn't possible for them to be well managed at the time it mattered. Sears was loaded with debt by private equity ghouls; same story for almost all defunct brick and mortar businesses; Amazon was a factor, but private equity is what actually destroyed them.
Thank you for bringing this up. Sears really didn't have a choice; they were a victim of the most pernicious LBO, Gordon Gekko-style strip-mining nonsense on the PE spectrum. All private equity is not the same, but after seeing two PE deals from the inside (one a leveraged buyout) and a VC one with the "grow at an insane pace" playbook, I think I prefer the naked and aligned greed of the VC model; PE destroyed both of the other companies, while the VC one was already doomed.
And, knowing Jeff Bezos' private equity origins, one could be forgiven for entertaining the thought that none of this was an accident. Just don't be an idiot and, you know, give voice to that thought or anything.
> And, knowing Jeff Bezos' private equity origins
He doesn't have private equity origins as far as I know. He came from DE Shaw, a very well respected and long running hedge fund.
Are you suggesting that Jeff Bezos somehow convinced all his PE buddies to tank Sears (and their own loans to it) in order for him to build Amazon with less competition? Because, well, no offense, but that seems like a remarkably naive understanding of capital markets and individual motivations. Especially when it's well documented how Eddie Lampert's libertarian beliefs caused him to run it into the ground.
> They weren't wrong.
Evidence suggests that maybe they were. "Focusing" obviously didn't work.
But at the end of the day, it was private equity and the hubris of a CEO who wasn't nearly as clever as he'd like to have thought he was.
For more on this -- and how Sears had everything it needed (and more) to be what Amazon became -- see this comment from a 2007 MetaFilter thread: https://www.metafilter.com/62394/The-Record-Industrys-Declin...
The untold story is the names of the individuals fighting the office politics that led to that (not) happening.
This is a great example that I hadn't heard of, and it reminds me of when Nintendo tried to become an ISP by building the Family Computer Network System in 1988.
A16Z once talked about how the scars of being too early cause investors/companies to become convinced that an idea will never work. Then some new, younger people who never got burned will try the same idea, and things will work.
Prodigy and the Faminet probably fall into that bucket, along with a lot of early internet companies that tried things early, got burned, and then were possibly too late to capitalise when it was finally the right time for the idea to flourish.
Reminds me of Elon not taking no for an answer. He did it twice, with massive success.
A true shame to see how he's completely lost his way with Tesla; the competition, particularly from China, is eating them alive. And in space, it's a matter of years until the rest of the world catches up.
And now he's run out of tricks - and, more importantly, out of public support. He can't pivot any more; his entire brand is too toxic to touch.
Lucky for him, the US government is keeping him from being eaten alive in the USA at least.
I remember that one time we tried to drastically limit Japanese imports to protect the American car industry, which basically created the Lexus LS400, one of the best cars ever made.
I don't know; you could argue that maybe GM's EV1 was the 'too early' EV and Tesla was just at the right moment. Same goes for SpaceX: the idea of a reusable launcher was not new and had been studied by NASA. I think they did some test vehicles.
SpaceX is an excellent example of this phenomenon. Reusable rockets were "known" to be financially infeasible because the Space Shuttle was so expensive. NASA & oldspace didn't seriously pursue reusable vehicles because the mostly reusable Space Shuttle cost so much more than conventional disposable vehicles.
Similar to how Sears didn't put their catalog online in the 90's because putting it online on Prodigy failed so badly in the 80's.
On the flip side, they didn't actually learn that lesson... that it was a matter of immature tech with relatively limited reach... by the time the mid-90's came through, "the internet is just a fad" was pretty much the sentiment from Sears' leadership...
They literally killed their catalog sales right when they should have been ramping up and putting it online. They could easily have beat out Amazon for everything other than books.
My cousin used to tell me that things work because they were the right thing at the right time. I think the example he gave was Amazon.
But I guess in startup culture one has to die trying to find the right time; sure, you can do surveys to get a feel for it, but the only way to ever know whether it's the right time is user feedback once it's launched, and over time.
Newton at Apple is another great one, though they of course got there.
They sure did. This reminds me of when I was in the local Mac Dealer right after the iPod came out. The employees were laughing together saying “nobody is going to buy this thing”.
The problem is that ISPs became a utility, not some fountain of unlimited growth.
What you're arguing is that AI is fundamentally going to be a utility, and while that's worth a floor of cash, it's not what investors or the market clamor for.
I agree though, it's fundamentally a utility, which means there's more value in proper government authority than in private interests.
Sears started Prodigy to become Amazon, not Comcast.
The product itself determines whether it's a utility, not the business interest. Assuming democracy works correctly, only a dysfunctional government ignores natural monopolies.
> We're clearly seeing what AI will eventually be able to do
Are we though? Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end. Code generation sucks. Agents suck. They still hallucinate. If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?
Also, the companies trying to "fix" issues with LLMs with more training data will just rediscover the "long-tail" problem... there is an infinite number of new things that need to be put into the dataset, and that's just going to reduce the quality of responses.
For example: the "there are three 'b's in blueberry" problem was caused by so much training data in response to "there are two r's in strawberry". It's a systemic issue; no amount of data will solve it because LLMs will -never- be sentient.
Finally, I'm convinced that any AI company promising they are on the path to General AI should be sued for fraud. LLMs are not it.
I have a feeling that you believe "translation, grammar, and tone-shifting" works but "code generation sucks" for LLMs because you're good at coding and hence you see its flaws, and you're not in the business of doing translation etc.
Pretty sure if you're going to use LLMs for translating anything non-trivial, you'd have to carefully review the outputs, just like if you're using LLMs to write code.
You know, you're right. It -also- sucks at those tasks because on top of the issue you mention, unedited LLM text is identifiable if you get used to its patterns.
Exactly. Books are still being translated by human translators.
I have a text on my computer, the first couple of paragraphs from the Dutch novel "De aanslag", and every few years I feed it to the leading machine translation sites, and invariably, the results are atrocious. Don't get me wrong, the translation is quite understandable, but the text is wooden, and the translation contains 3 or 4 translation blunders.
GPT-5 output for example:
Far, far away in the Second World War, a certain Anton Steenwijk lived with his parents and his brother on the edge of Haarlem. Along a quay, which ran for a hundred meters beside the water and then, with a gentle curve, turned back into an ordinary street, stood four houses not far apart. Each surrounded by a garden, with their small balconies, bay windows, and steep roofs, they had the appearance of villas, although they were more small than large; in the upstairs rooms, all the walls slanted. They stood there with peeling paint and somewhat dilapidated, for even in the thirties little had been done to them. Each bore a respectable, bourgeois name from more carefree days: Welgelegen Buitenrust Nooitgedacht Rustenburg Anton lived in the second house from the left: the one with the thatched roof. It already had that name when his parents rented it shortly before the war; his father had first called it Eleutheria or something like that, but then written in Greek letters. Even before the catastrophe occurred, Anton had not understood the name Buitenrust as the calm of being outside, but rather as something that was outside rest—just as extraordinary does not refer to the ordinary nature of the outside (and still less to living outside in general), but to something that is precisely not ordinary.
Can you provide a reference translation, or at least call out the issues you see with this passage? I see "far, far away in the [time period]", which I imagine should be "a long time ago". What are the other issues?
- "they were more small than large" (what?)
- "even in the thirties little had been done to them" (done to them?)
- "Welgelegen Buitenrust Nooitgedacht Rustenburg" (Untranslated!)
- "his father had first called it Eleutheria" (his father'd rather called it)
- "just as extraordinary does not refer to the ordinary nature of the outside" (complete non-sequitur)
What are you talking about? "Welgelegen Buitenrust Nooitgedacht Rustenburg" is perfectly cromulent English.
For what it's worth, I do use AI for language learning, though I'm not sure it's the best idea. Primarily for helping translate German news articles into English and making vocabulary flashcards; it's usually clear when the AI has lost the plot and I can correct the translation by hand. Of course, if issues were more subtle then I probably wouldn't catch them ...
Thanks yeah, you’re right these are bad.
Not the original poster, but you can read these paragraphs translated by a human on amazon's sneak peek: https://lesen.amazon.de/sample/B0D74T75KH?f=1&l=de_DE&r=2801...
The difference is gigantic.
By definition, transformers can never exceed average.
That is the thing, and what companies pushing LLMs don't seem to realize yet.
Can you expand on this? For tasks with verifiable rewards you can improve with rejection sampling and search (i.e. test time compute). For things like creative writing it’s harder.
For creative writing, you can do the same, you just use human verifiers rather than automatic ones.
LLMs have encountered the entire spectrum of qualities in its training data, from extremely poor writing and sloppy code, to absolute masterpieces. Part of what Reinforcement Learning techniques do is reinforcing the "produce things that are like the masterpieces" behavior while suppressing the "produce low-quality slop" one.
Because there are humans in the loop, this is hard to scale. I suspect that the propensity of LLMs for certain kinds of writing (bullet points, bolded text, conclusion) is a direct result of this. If you have to judge 200 LLM outputs per day, you prize different qualities than when you ask for just 3. "Does this look correct at a glance" is then a much more important quality.
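To make the rejection-sampling idea concrete, here's a minimal best-of-n sketch in Python. `generate` and `verify` are hypothetical stand-ins for a model call and a reward check, not any particular API:

    def best_of_n(prompt, generate, verify, n=8):
        # draw n candidates, keep the one the verifier scores highest
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=verify)

    def toy_verify(text):
        # toy "verifiable reward": 1.0 if the output parses as an integer
        try:
            int(text.strip())
            return 1.0
        except ValueError:
            return 0.0

For creative writing, `verify` becomes a person ranking outputs, which is exactly why that case is hard to scale.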
> Aside from a narrow set of tasks like translation, grammar, and tone-shifting, LLMs are a dead end.
I consider myself an LLM skeptic, but gee saying they are a "dead end" seems harsh.
Before LLMs came along, computers understanding human language was a graveyard academics went to end their careers in. Now computers are better at it, and far faster, than most humans.
LLMs also have an extraordinary ability to distill and compress knowledge, so much so that you can download a model whose size is measured in GB, and it seems to have a pretty good general knowledge of everything on the internet. Again, far better than any human could do. Yes, the compression is lossy, and yes, they consequently spout authoritative-sounding bullshit on occasion. But I use them regardless as a sounding board, and I can ask them questions in plain English rather than go on a magical keyword hunt.
Merely being able to understand language or having a good memory is not sufficient to code or do a lot else on its own. But they are necessary ingredients for many tasks, and consequently it's hard to imagine an AI that can competently code that doesn't have an LLM as a component.
> it's hard to imagine a AI that can competently code that doesn't have an LLM as a component.
That's just it. LLMs are a component: they generate text or images from a higher-level description but are not themselves "intelligent". If you imagine the language center of your brain being replaced with a tiny LLM-powered chip, you would not say it's sentient. It translates your thoughts into words, which you then choose to speak or not. That's all modulated by consciousness.
> If you wouldn't trust its medical advice without review from an actual doctor, why would you trust its advice on anything else?
When an LLM gives you medical advice, it's right x% of the time. When a doctor gives you medical advice, it's right y% of the time. During the last few years, x has gone from 0 to wherever it is now, while y has mostly stayed constant. It is not unimaginable to me that x might (and notice I said might, not will) cross y at some point in the future.
The real problem with LLM advice is that it is harder to find a "scapegoat" (particularly for legal purposes) when something goes wrong.
Microsoft claims that they have an AI setup that outperforms human doctors on diagnosis tasks: https://microsoft.ai/new/the-path-to-medical-superintelligen...
"MAI-DxO boosted the diagnostic performance of every model we tested. The best performing setup was MAI-DxO paired with OpenAI’s o3, which correctly solved 85.5% of the NEJM benchmark cases. For comparison, we also evaluated 21 practicing physicians from the US and UK, each with 5-20 years of clinical experience. On the same tasks, these experts achieved a mean accuracy of 20% across completed cases."
Of course, AI "doctors" can't do physical examinations and the best performing models cost thousands to run per case. This is also a test of diagnosis, not of treatment.
If you consider how little time doctors have to look at you (at least in Germany's half-broken public health sector) and how little they actually care ...
I think x is already higher than y for me.
That's fair. Reliable access to a 70% expert is better than no access to a 99% expert.
Or maybe not. Scaling AI will require an exponential increase in compute and processing power, and even the current LLM models take up a lot of resources. We are already at the limit of how small we can scale chips and Moore’s law is already dead.
So newer chips will not be exponentially better but will be more of incremental improvements, so unless the price of electricity comes down exponentially we might never see AGI at a price point that’s cheaper than hiring a human.
Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.
The reason why internet, smartphones and computers have seen exponential growth from the 90s is due to underlying increase in computing power. I personally used a 50Mhz 486 in the 90s and now use a 8c/16t 5Ghz CPU. I highly doubt if we will see the same form of increase in the next 40 years
> Scaling AI will require an exponential increase in compute and processing power,
A small quibble... I'd say that's true only if you accept as an axiom that current approaches to AI are "the" approach and reject the possibility of radical algorithmic advances that completely change the game. For my part, I have a strongly held belief that there is such an algorithmic advancement "out there" waiting to be discovered, that will enable AI at current "intelligence" levels, if not outright Strong AI / AGI, without the absurd demands on computational resources and energy. I can't prove that of course, but I take the existence of the human brain as an existence proof that some kind of machine can provide human level intelligence without needing gigawatts of power and massive datacenters filled with racks of GPU's.
If we suppose that ANNs are more or less accurate models of real neural networks, the reason why they're so inefficient is not algorithmic, but purely architectural. They're just software. We have these huge tables of numbers and we're trying to squeeze them as hard as possible through a relatively small number of multipliers and adders. Meanwhile, a brain can perform a trillion fundamental operations simultaneously, because every neuron is a complete processing element independent of every other one. To bring that back into more concrete terms, if we took an arbitrary model and turned it into a bespoke piece of hardware, it would certainly be at least one or two orders of magnitude faster and more efficient, with the downside that, since it's dead silicon, it could not be changed and iterated on.
If you account for the fact that biological neurons operate at a much lower frequency than silicon processors, then the raw performance gets much closer. From what I can find, neuron membrane time constant is around 10ms [1], meaning 10 billion neurons could have 1 trillion activations per second, which is in the realm of modern hardware.
People mentioned in [2] have done the calculations from a more informed position than I have, and reach numbers like 10^17 FLOPS when doing a calculation that resembles this one.
[1] https://spectrum.ieee.org/fast-efficient-neural-networks-cop...
[2] https://aiimpacts.org/brain-performance-in-flops/
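As a rough back-of-envelope in Python (the ~10 ms time constant comes from [1]; the neuron and synapse counts are order-of-magnitude assumptions, not measurements):

    neurons = 1e10                 # order-of-magnitude cortical neuron count
    updates_per_sec = 1 / 10e-3    # one potential activation per ~10 ms
    synapses_per_neuron = 1e4      # rough fan-in; ~1 multiply-add each

    activations = neurons * updates_per_sec   # ~1e12 activations/s
    ops = activations * synapses_per_neuron   # ~1e16 synaptic ops/s
    print(f"{activations:.0e} activations/s, {ops:.0e} ops/s")

which lands in the same 10^16-10^17 FLOPS ballpark as the estimates in [2].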
> the reason why they're so inefficient is not algorithmic, but purely architectural.
I would agree with that, with the caveat that in my mind "the architecture" and "the algorithm" are sort of bound up with each other. That is, one implies the other -- to some extent.
And yes, fair point that building dedicated hardware might just be part of the solution to making something that runs much more efficiently.
The only other thing I would add, is that - relative to what I said in the post above - when I talk about "algorithmic advances" I'm looking at everything as potentially being on the table - including maybe something different from ANN's altogether.
The energy inefficiency of ANNs vs our brain is mostly because our brain operates in async dataflow mode with each neuron mostly consuming energy only when it fires. If a neuron's inputs haven't changed then it doesn't redundantly "recalculate it's output" like an ANN - it just does nothing.
You could certainly implement an async dataflow type design in software, although maybe not as power-efficiently as with custom silicon. But individual ANN node throughput would suffer, given the need to aggregate the neurons needing updates into a group to be fed into one of the large matrix multiplies that today's hardware is optimized for, although sparse operations are also a possibility. OTOH, conceivably one could save enough FLOPs that it'd still be a win in terms of how fast an input could be processed through an entire neural net. A toy sketch of the idea is below.
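Here's that dataflow idea as a toy Python sketch: only neurons downstream of a changed input get recomputed, and everything else does no work. The network layout and all the names here are purely illustrative, not a real spiking-network framework:

    incoming = {"h1": {"x1": 0.5, "x2": -0.3},   # neuron -> {source: weight}
                "h2": {"x2": 0.8}}
    fanout = {"x1": ["h1"], "x2": ["h1", "h2"]}  # source -> dependent neurons
    act = {"x1": 1.0, "x2": 0.0, "h1": 0.0, "h2": 0.0}

    def on_change(changed_inputs):
        # only neurons fed by a changed input are marked dirty and updated
        dirty = {n for src in changed_inputs for n in fanout[src]}
        for n in dirty:
            total = sum(w * act[s] for s, w in incoming[n].items())
            act[n] = max(total, 0.0)             # ReLU-style threshold
        return dirty

    on_change({"x2"})   # recomputes h1 and h2; a dense ANN redoes everything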
> If we suppose that ANNs are more or less accurate models of real neural networks
i believe the problem is we don't understand actual neurons let alone actual networks of neurons to even know if any model is accurate or not. The AI folks cleverly named their data structures "neuron" and "neural network" to make it seem like we do.
This is a bit of a cynical take. Neural networks have been "a thing" for decades. A quick google suggests 1940s. I won't quibble on the timeline but no-one was trying to trick anyone with the name back then, and it just stuck around.
> If we suppose that ANNs are more or less accurate models of real neural networks [..]
ANNs were inspired by biological neural structures, and that's it. They are not representative models at all, even of the "less" variety. Dedicated hardware will certainly help, but no insights into how much it can help will come from this sort of comparison.
Could you explain your claim that ANNs are nothing like real neural networks beyond their initial inspiration (if you'll accept my paraphrasing). I've seen it a few times on HN, and I'm not sure what people mean by it.
By my very limited understanding of neural biology, neurons activate according to inputs that are mostly activations of other neurons. A dot product of weights and inputs (i.e. one part of matrix multiplication) together with a threshold-like function doesn't seem like a horrible way to model this. On the other hand, neurons can get a bit fancier than a linear combination of inputs, and I haven't heard anything about biological systems doing something comparable to backpropagation, but I'd like to know whether we understand enough to say for sure that they don't.
>I haven't heard anything about biological systems doing something comparable to backpropagation
The brain isn't organized into layers like ANNs are. It's a general graph of neurons and cycles are probably common.
Actually that's not true. Our neocortex - the "crumpled up" outer layer of our brain, which is basically responsible for cognition/intelligence - has a highly regular architecture. If you uncrumpled it, it'd be a thin sheet of neurons about the size of a tea towel, consisting of 6 layers of different types of neurons with a specific inter-layer and intra-layer pattern of connections. It's not a general graph at all, but rather a specific processing architecture.
None of what you've said contradicts that it's a general graph rather than, say, a DAG. It doesn't rule out cycles either within a single layer or across multiple layers. And even if it did, the brain is not just the neocortex, and the neocortex isn't isolated from the rest of the topology.
It's a specific architecture. Of course there are (massive amounts of) feedback paths, since that's how we learn: top-down prediction and bottom-up sensory input. There is of course looping too - e.g. the thalamo-cortical loop - we are not just a pass-thru reactionary LLM!
Yes, there is a lot more structure to the brain than just the neocortex - there are all the other major components (thalamus, hippocampus, etc.), each with their own internal architecture, and then specific patterns of interconnect between them...
This all reinforces what I am saying - the brain is not just some random graph - it is a highly specific architecture.
Did I say "random graph", or did I say "general graph"?
>There is of course looping too - e.g. thalamo-cortical loop - we are not just as pass-thru reactionary LLM!
Uh-huh. But I was responding to a comment about how the brain doesn't do something analogous to back-propagation. It's starting to sound like you've contradicted me to agree with me.
I didn't say anything about back-propagation, but if you want to talk about that, then it depends on how "analogous" you want to consider ...
It seems very widely accepted that the neocortex is a prediction machine that learns by updating itself based on sensory detection of top-down prediction failures, and with multiple layers (cortical patches) of pattern learning and prediction, there necessarily has to be some "propagation" of prediction error feedback from one layer to another, so that all layers can learn.
Now, does the brain learn in a way directly equivalent to backprop in terms of using exact error gradients or a single error function? No - presumably not, it more likely works in layered fashion with each higher level providing error feedback to the layer below, with that feedback likely just being what was expected vs what was detected (i.e. not a gradient - essentially just a difference). Of course gradients are more efficient in terms of selecting varying update step sizes, but directional would work fine too. It would also not be surprising if evolution has stumbled upon something similar to Bayesian updates in terms of how to optimally incrementally update beliefs (predictions) based on conflicting evidence.
So, that's an informed guess of how our brain is learning - up to you whether you want to regard that as analogous to backprop or not.
Neurons don't just work on electrical potentials; they also have multiple whole systems of neurotransmitters that affect their operation. So I don't think their activation is a continuous function. Although I suppose we could use non-continuous activation functions in a NN, I don't think there's an easy way to train a NN that does that.
Sure, a real neuron activates by outputting a train of spikes after some input threshold has been crossed (a complex matter of synapse operation, not just a summation of inputs), while in ANNs we use "continuous" activation functions like ReLU... But note that the output of a ReLU, while continuous, is basically on or off, equivalent to a real neuron having crossed its activation threshold or not.
If you really wanted to train artificial spiking neural networks in biologically plausible fashion then you'd first need to discover/guess what that learning algorithm is, which is something that has escaped us so far. Hebbian "fire together, wire together" may be part of it, but we certainly don't have the full picture.
OTOH, it's not yet apparent whether an ANN design that more closely follows real neurons has any benefit in terms of overall function, although an async dataflow design would be a lot more efficient in terms of power usage.
DeepMind was experimenting with this a few years ago: https://github.com/google-deepmind/lab
Having AI agents learn to see, navigate and complete tasks in a 3D environment. I feel like it had more potential than LLMs to become an AGI (if that is possible).
They haven't touched it in a long time, though. But Genie 3 makes me think they haven't completely dropped it.
> Scaling AI will require an exponential increase in compute and processing power,
I think there is something more happening with AI scaling; the scaling factor per user is a lot higher and a lot more expensive. Compare it to the big early internet companies: you added one server and you could handle thousands more users; the incremental cost was very low, not to mention the revenue captured through whatever adtech means. Not so with AI workloads; they are so much more expensive than ad revenue can cover that it's hard to break even, even with an actual paid subscription.
I don't fully get why, though; inference costs are way lower than training costs, no?
We know for a fact that human level general intelligence can be achieved on a relatively modest power budget. A human brain runs on somewhere from about 20-100W, depending on how much of the rest of the body's metabolism you attribute to supporting it.
The fact that the human brain, heck all brains, are so much more efficient than “state of the art” nnets, in terms of architecture, power consumption, training cost, what have you … while also being way more versatile and robust … is what convinces me that this is not the path that leads to AGI.
> We are already at the limit of how small we can scale chips
I strongly suspect this is not true for LLMs. Once progress stabilizes, doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.
Then there's distillation, which basically makes smaller models get better as bigger models get better. You don't necessarily need to run a big model all of the time to reap its benefits.
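For what distillation looks like mechanically, here's a minimal sketch: a small "student" is trained to match a large "teacher's" softened output distribution. The shapes and temperature value are illustrative assumptions, not anyone's actual training recipe:

    import numpy as np

    def softmax(logits, T=2.0):
        z = logits / T                 # temperature softens the distribution
        e = np.exp(z - z.max())
        return e / e.sum()

    def distill_loss(student_logits, teacher_logits, T=2.0):
        # cross-entropy of the student against the teacher's soft targets
        p_teacher = softmax(teacher_logits, T)
        log_p_student = np.log(softmax(student_logits, T))
        return -np.sum(p_teacher * log_p_student)

    print(distill_loss(np.array([2.0, 0.5]), np.array([1.8, 0.7])))

Minimizing this pushes the student toward the teacher's behavior without ever running the teacher at inference time.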
> so unless the price of electricity comes down exponentially
This is more likely than you think. AI is extremely bandwidth-efficient and not too latency-sensitive (unlike e.g. Netflix et al), so it's pretty trivial to offload AI work to places where electricity is abundant and power generation is lightly regulated.
> Most companies are already running AI models at a loss, scaling the models to be bigger(like GPT 4.5) only makes them more expensive to run.
"We're profitable on inference. If we didn't pay for training, we'd be a very profitable company." Sam Altman, OpenAI CEO[1].
[1] https://www.axios.com/2025/08/15/sam-altman-gpt5-launch-chat...
>doing things like embedding the weights of some model directly as part of the chip will suddenly become economical, and that's going to cut costs down dramatically.
An implementation of inference on some specific ANN in fixed function analog hardware can probably pretty easily beat a commodity GPU by a couple orders of magnitude in perf per watt too.
> "We're profitable on inference. If we didn't pay for training, we'd be a very profitable company."
That's OpenAI (though I'd be curious if that statement holds for subscriptions as opposed to API use). What about the downstream companies that use OpenAI models? I'm not sure the picture is as rosy for them.
> The groundwork has been laid, and it's not too hard to see the shape of things to come.
The groundwork for VR has also been laid and it's not too hard to see the shape of things to come. Yet VR hasn't moved far beyond the previous hype cycle 10 years ago, because some problems are just really, really hard to solve.
Is it still giving people headaches and making them nauseous?
Yes, it still gives people headaches because the convergence-accommodation conflict remains unsolved. We have a few different technologies to address that, but they're expensive, don't fully address the issue, and none of them have moved beyond the prototype stage.
Motion sickness can be mostly addressed with game design, but some people will still get sick regardless of what you try. Mind you, some people also get motion sick by watching a first-person shooter on a flat screen, so I'm not sure we'll ever get to a point where no one ever gets motion sick in VR.
> Mind you, some people also get motion sick by watching a first-person shooter on a flat screen
Yep I'm that guy. I blame it on being old.
As someone who was a customer of Netflix from the dialup to broadband world, I can tell you that this stuff happens much faster than you expect. With AI we're clearly in the "it really works, but there are kinks and scaling problems" of, say, streaming video in 2001 -- whereas I think you mean to indicate we're trying to do Netflix back in the 1980s where the tech for widespread broadband was just fundamentally not available.
Oh, like RealPlayer in the late 90's (buffering... buffering...)
RealPlayer in the late 90s turned into (working) Napster, Gnutella and then the iPod in 2001, Podcasts (without the name) immediately after, with the name in 2004, Pandora in 2005, Spotify in 2008. So a decade from crummy idea to the companies we’re familiar with today, but slowed down by tremendous need for new (distributed) broadband infrastructure and complicated by IP arrangements. I guess 10 years seems like a long time from the front end, but looking back it’s nothing. Don’t go buying yourself a Tower Records.
While I get the point... to be pedantic though, Napster (first gen), Gnutella and iPod were mostly download and listen offline experiences and not necessarily live streaming.
Another major difference is that we're near the limits of the approaches being taken to computing capability... most dialup connections, even on "56k" modems, were still lucky to get 33.6kbps down, and that was very common in the late 90's, whereas by the mid-2000's a lot of users had at least 512kbps-10mbps connections (where available), and even then a lot of people didn't see broadband until the 2010's.
That's at least a 15x improvement, whereas we are far less likely to see even a 3-5x improvement in computing power over the next decade and a half. That's also a lot of electricity to generate on ageing infrastructure that barely meets current needs in most of the world... even harder with "green" options.
I moved to NYC in 1999 and got my first cable modem that year. This meant I could stream AAC audio from a jukebox server we maintained at AT&T Labs. So for my unusual case, streaming was a full-fledged reality I could touch back then. Ironically running a free service was easy, but figuring out how to get people (AKA the music industry) to let us charge for the service was impossible. All that extra time was just waiting for infrastructure upgrades to spread across a whole country to the point that there were enough customers that even the music industry couldn’t ignore the economics; none of the fundamental tech was missing. With LLMs I have access to a pretty robust set of models for about $20/mo (I’m assuming these aren’t 10x loss leaders?), plus pretty decent local models for the price of a GPU. What’s missing this time is that the nature of the “business” being offered is much more vague, plus the reliability isn’t quite there yet. But on the bright side, there’s no distributed infrastructure to build.
>I have an overwhelming feeling that what we're trying to do here is "Netflix over DialUp."
I totally agree with you... though the other day, I did think the same thing about the 8bit era of video games.
It's a logical fallacy that just because some technology experienced some period of exponential growth, all technology will always experience constant exponential growth.
There are plenty of counter-examples to the scaling of computers that occurred from the 1970s-2010s.
We thought that humans would be traveling the stars, or at least the solar system, after the space race of the 1960s, but we ended up stuck orbiting the earth.
Going back further, little has changed daily life more than technologies like indoor plumbing and electric lighting did in the late 19th century.
The ancient Romans came up with technologies like concrete that were then lost for hundreds of years.
"Progress" moves in fits and starts. It is the furthest thing from inevitable.
Most growth is actually logistic. An S shaped curve that starts exponential but slows down rapidly as it reaches some asymptote. In fact basically everything we see as exponential in the real world is logistic.
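You can see this numerically: the two curves are indistinguishable early on, then diverge hard. A quick sketch (K and r are arbitrary illustrative values):

    import math

    K, r = 1000.0, 0.5    # carrying capacity and growth rate, n(0) = 1
    for t in range(0, 25, 4):
        exp_n = math.exp(r * t)
        log_n = K / (1 + (K - 1) * math.exp(-r * t))
        print(t, round(exp_n), round(log_n))
    # by t=20 the exponential is past 22,000 and climbing;
    # the logistic has flattened out just under K = 1000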
True, but adoption of AI has certainly seen exponential growth.
Improvement of models may not continue to be exponential.
But models might be good enough, at this point it seems more like they need integration and context.
I could be wrong :)
At what cost though? Most AI operations are losing money, using a lot of power, with massive infrastructure costs, not to mention the hardware costs to get going. And that isn't even covering the level of usage many/most people want, which they certainly aren't going to pay the hundreds of dollars per month per person that it currently costs to provide.
This is a really basic way to look at unit economics of inference.
I did some napkin math on this.
32x H100s rent at 'retail' prices for about $2/hr each. I would hope that the big AI companies get them cheaper than this at their scale.
These 32 H100s can probably do something on the order of >40,000 tok/s on a frontier scale model (~700B params) with proper batching. Potentially a lot more (I'd love to know if someone has some thoughts on this).
So that's $64/hr or just under $50k/month.
40k tok/s is a lot of usage, at least for non-agentic use cases. There is no way you are losing money on paid ChatGPT users at $20/month on these.
You'd still break even supporting ~200 Claude Code-esque agentic users who were using it at full tilt 40% of the day at $200/month.
Now - this doesn't include training costs or staff costs, but on a pure 'opex' basis I don't think inference is anywhere near as unprofitable as people make out.
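Putting the same napkin math in code (every input below is the assumption from above, not a measured figure):

    gpus, price_per_gpu_hr = 32, 2.00      # 'retail' H100 rental, per GPU
    tok_per_sec = 40_000                   # assumed batched throughput

    cost_per_hr = gpus * price_per_gpu_hr            # $64/hr
    cost_per_month = cost_per_hr * 24 * 30           # ~$46k/month

    per_million_tok = cost_per_hr / (tok_per_sec * 3600) * 1e6
    print(per_million_tok)                 # ~$0.44 per 1M tokens served

    print(200 * 200, cost_per_month)       # $40k/mo revenue vs ~$46k/mo cost

At a 40% duty cycle that's ~80 concurrent streams sharing 40k tok/s, i.e. ~500 tok/s each, which seems plausible for coding agents.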
My thought is closer to the developer user who would want to include their codebase as part of the queries, along with heavy use all day long... which is closer to my point that many users are unlikely to spend hundreds a month, at least with the current level of results people get.
That said, you could be right, considering Claude Max's price is $100/mo... but I'm not sure where that sits in terms of typical, or top 5%, usage relative to the monthly allowance.
> True, but adoption of AI has certainly seen exponential growth.
I mean, for now. The population of the world is finite, and there's probably a finite number of uses of AI, so it's still probably ultimately logistic
> I did think the same thing about the 8bit era of video games.
Can you elaborate? That sounds interesting.
Too soon to get it to market, though it obviously all sold perfectly well; people were sufficiently wowed by it.
Speaking of Netflix -
I think the image, video, audio, world model, diffusion domains should be treated 100% separately from LLMs. They are not the same thing.
Image and video AI is nothing short of revolutionary. It's already having huge impact and it's disrupting every single business it touches.
I've spoken with hundreds of medium and large businesses about it. They're changing how they bill clients and budget projects. It's already here and real.
For example, a studio that does over ten million in revenue annually used to bill ~$300k for commercial spots. Pharmaceutical, P&G, etc. Or HBO title sequences. They're now bidding ~$50k and winning almost everything they bid on. They're taking ten times the workload.
Fwiw LLMs are also revolutionary. There's currently more anti-AI hype than AI hype imho. As in there's literally people claiming it's completely useless and not going to change a thing. Which is crazy.
That’s an anecdote about intensity, not volume. The extremes on both sides are indeed very extreme (no value, replacing most white collar jobs next year).
IME the volume is overwhelming on the pro-LLM side.
Yeah the conversation on both extremes feels almost religious at times. The pro LLM hype feels more disconcerting sometimes because there are literally billions if not trillions of dollars riding on this thing, so people like Sam Altman have a strong incentive to hype the shit out of it.
One side's extremes say LLMs won't change a thing; the other side's extremes say LLMs will end the world.
I don't think the ones saying it wont change a thing are the most extreme here.
Luckily for humanity reality is somewhere in between extremes, right?
You're right, and I also think LLMs have an impact.
The issue is the way the market is investing they are looking for massive growth, in the multiples.
That growth can't really come from cutting costs. It has to come from creating new demand for new things.
I think that's what hasn't happened yet.
Are diffusion models increasing the demand for video and image content? Are they getting customers to spend more on shows, games, and so on? Are they going to lead to the creation of a whole new consumption medium?
> Is it going to lead to the creation of a whole new consumption medium ?
Good question? Is that necessary, or is it sufficient for AI to be integrated in every kind of CAD/design software out there?
Because I think most productivity tools whether CAD, EDA, Office, graphic 2d/3d design, etc will benefit from AI. That's a huge market.
I guess there are two markets to consider.
The market of the AI foundation models itself, will they have customers long term willing to pay a lot of money for access to the models?
I think yes, there will be demand for foundational AI models, and a lot of it.
The second market is the market for CAD, EDA, Office, graphic 2D/3D design, etc. Will this market grow because they integrate AI into their products? That is the question. Otherwise, you could almost hypothesize that these markets will shrink, as AI becomes an additional cost of business that customers expect to be included. Or maybe they manage to sell their customers a premium for the AI features, taking a cut above what they pay the foundational models under the hood; that's a possibility.
I see the point at the moment on "low quality advertising", but we are still far from high-quality video generated by AI.
It's the equivalent of those cheap digital effects. They look bad for a Hollywood movie, but they allow students to shoot their action home movies.
You're looking at individual generations. These tools aren't for casual users expecting to 1-shot things.
The value is in having a director, editor, VFX compositor pick and choose from amongst the outputs. Each generation is a single take or simulation, and you're going to do hundreds or thousands. You sift through that and explore the latent space, and that's where you find your 5-person Pixar.
Human curated AI is an exoskeleton that enables small teams to replace huge studios.
Is there any example of an AI generated film like this that is actually coherent? I've seen a couple short ones that are basically just vibe based non-linear things.
https://www.instagram.com/aist.aistories/ has tons of counterpoints, the Sam Altman vs Zuckerberg matrix recreation has been making the rounds...
I recommend following specific artists rather than the medium as a whole.
Anything by sketch comedian Carter Jay Allen:
https://www.youtube.com/@OfficialArtCraftStudios/videos
https://www.youtube.com/watch?v=H4NFXGMuwpY - Marvel parody
https://www.youtube.com/watch?v=tAAiiKteM-U - DC parody
https://www.youtube.com/watch?v=Tii9uF0nAx4 - here's him compositing real life actors with AI.
"Bots in the Hall", a fairly prolific Hollywood film and TV writer who wants to remain unnamed:
https://www.youtube.com/@BotsInTheHall/videos
https://www.youtube.com/watch?v=FAQWRBCt_5E - "Paywall Sphinx" is pretty good.
"Meta Puppet", who works for one of the big AI studios,
https://www.youtube.com/watch?v=vtPcpWvAEt0 - "Plastic" doesn't look great, but it keeps getting crazier as you watch it
https://www.metapuppet.ai/
Some of the festival winners purposely stay away from talking since AI voices and lipsync are terrible, eg. "Poof" by the infamous "Pizza Later" (who is responsible for "Pepperoni Hug Spot") :
https://www.youtube.com/watch?v=t_SgA6ymPuc
"Talk Boys", who only posts on Reddit:
https://www.reddit.com/user/talkboys/
https://www.reddit.com/r/aivideo/comments/1ime5m8/birdwatche...
Marcos Higueras, an animator fully embracing AI:
https://www.youtube.com/watch?v=OCZC6XmEmK0
Most of the professional AI usage is still winding up in commercial use cases where you don't even know it's been used at all.
It's quite incredible how fast the generative media stuff is moving.
The self-hostable models are improving rapidly. How capable and accessible WAN 2.2 is (text+image to video; fully local if you have the VRAM) would have felt unimaginable a year ago, when OpenAI released the closed, hosted Sora.
As long as you do not make ads with four-fingered hands, like those clowns ... :)
https://www.lapresse.ca/arts/chroniques/2025-07-08/polemique...
https://www.npr.org/2025/06/23/nx-s1-5432712/ai-video-ad-kal...
A typical large-team $300,000 ad, made for under $2,000 in a weekend by one person.
It's going to be a bloodbath.
> Kalshi's Jack Such declined to disclose Accetturo's fee for creating the ad. But, he added, "the actual cost of prompting the AI — what is being used in lieu of studios, directors, actors, etc. — was under $2,000."
So in other words, if you ignore the costs of paying people to create the ad, it barely costs anything. A true accounting miracle!
Do you pay people to pump your gas?
How about harvesting your whale blubber to power your oil lamp at night?
The nature of work changes all the time.
If an ad can be made with one person, that's it. We're done. There's no going back to hiring teams of 50 people.
It's stupid to say we must hire teams of 50 to make an advertisement just because. There's no reason for that. It's busy work. The job is to make the ad, not to give 50 people meaningless busy work.
And you know what? The economy is going to grow to accommodate this. Every single business is now going to need animated ads. The market for video is going to grow larger than we've ever before imagined, and in ways we still haven't predicted.
Your local plumber is going to want a funny action movie trailer slash plumbing advertisement to advertise their services. They wouldn't have even been in the market before.
You're going to have silly videos for corporate functions. Independent filmmakers will be making their own Miyazaki and Spielberg epics that cater to the most niche of audiences - no more mass market Marvel that has to satisfy everybody, you're going to see fictional fantasy biopic reimaginings of Grace Hopper fighting the vampire Nazis. Whatever. There'll be a market for everything, and 100,000 times as many creators with actual autonomy.
In some number of years, there is going to be so much more content being produced. More content in single months than in all human history up to this point. Content that caters to the very long tail.
And you know what that means?
Jobs out the wazoo.
More jobs than ever before.
They're just going to look different and people will be doing more.
> Your local plumber is going to want a funny action movie trailer slash plumbing advertisement to advertise their services. They wouldn't have even been in the market before.
And why would your local plumber hire someone to produce this funny action trailer (which I'm not convinced would actually help them from an advertising perspective), when they can simply have an AI produce it without hiring anyone? Assuming models improve sufficiently, that will become trivially possible.
> Independent filmmakers will be making their own Miyazaki and Spielberg epics that cater to the most niche of audiences - no more mass market Marvel that has to satisfy everybody, you're going to see fictional fantasy biopic reimaginings of Grace Hopper fighting the vampire Nazis.
Well, first of all, if the audience is "the most niche of audiences", then I'm not sure how that's going to lead to a sustainable career. And again -- if I want to see my niche historical fantasy interests come to life in a movie about Grace Hopper fighting vampire Nazis, why will I need a filmmaker to create this for me when I can simply prompt an AI myself? "Give me a fun action movie that incorporates famous computer scientists fighting Nazis. Make it 1.5 hours long, and give it a comedic tone."
I think you're fundamentally overvaluing what humans will be able to provide in an era where creating content is very cheap and very easy.
This ad was purposely playing off the fact that it was AI, though; it was a long string of short, bizarre bits, like two old women selling Fresh Manatee out of the back of a truck. You couldn't replace a regular ad with this.
I've got friends at WPP. Heads are rolling.
This is very much real and happening as we speak.
oh no the poor advertisers
Cheaper, poorer-quality ads mean a bad time for us, the people being incessantly targeted by this crap.
Websites are already finding creative ways around DNS blocklists for ad serving.
> Netflix over DialUp
https://en.wikipedia.org/wiki/RealNetworks
There’s no evidence that it’ll scale like that. Progress in AI has always been a step function.
There's also no evidence that it won't, so your opinion carries exactly the same weight as theirs.
> Progress in AI has always been a step function.
There's decisively no evidence of that, since whatever measure you use to rate "progress in AI" is bound to be entirely subjective, especially with such a broad statement.
> There's also no evidence that it won't
There are signs, though. Every "AI" cycle, ever, has revolved around some algorithmic discovery, followed by a winter in search of the next one. This one is no different, propped up by LLMs, whose limitations we know quite well by now: "intelligence" is elusive, throwing more compute at them produces vastly diminishing returns, and throwing more training data at them is no longer feasible (we were running short of it even before the well got poisoned). Now the competitors are stuck at the same level, within percentage points of one another, with the differences explained by fine-tuning techniques rather than technical prowess. Unless a cool new technique comes along to dislodge LLMs, we are in for a new winter.
Oh, I believe that while LLMs are a dead end now, the applications of AI in vision and the physical world (i.e. robots with limbs) will usher in yet another wrecking of the lower classes of society.
Just as AI has killed off all demand for lower-skill work in copywriting, translation, design and coding, it will do so for manufacturing. And that will be a dangerous bloodbath, because there will not be enough juniors anymore to replace the seniors aging out or quitting in frustration at being reduced to cleaning up AI crap.
What is your definition of "evidence" here? The evidence, in my view, are physical (as in, available computing power) and algorithmic limitations.
We don't expect steel to suddenly have new properties, and we don't expect bubble sort to suddenly run in O(n) time. You could ask -- well what is the evidence they won't, but it's a silly question -- the evidence is our knowledge of how things work.
Saying that improvement in AI is inevitable depends on the assumption of new discoveries and new algorithms beyond the current corpus of machine learning. They may happen, or they may not, but I think the burden of proof is higher on those spending money in a way that assumes it will happen.
I don’t follow. We have benchmarks that have survived decades and illustrate the steps.
What do you call GPT 3.5?
rodent -> homo sapiens brain scales just fine? It's tenuous evidence, but not zero.
Uh, there have been multiple repeated step-ups in the last 15 years. The trend line is up, up, up.
The innovation here is that the step function didn't traditionally go down
Is some potential AGI breakthrough in the future going to come from LLMs, or will they plateau in terms of capabilities?
It's hard for me to imagine Skynet growing from ChatGPT.
The old story of the paperclip AI shows that AGI is not needed for a sufficiently smart computer to be dangerous.
I'm starting to agree with this viewpoint. As the technology seems to solidify to roughly what we can do now, the aspirations are going to have to get cut back until there's a couple more breakthroughs.
I'm not convinced that the immaturity of the tech is what's holding back the profits. The impact and adoption of the tech are through the roof. It has shaken the job market across sectors like I've never seen before. My thinking is that if the bubble bursts, it won't be because the technology failed to deliver functionally; it will be because the technology simply does not become as profitable to operate as everyone is betting right now.
What will it mean if the cutting edge models are open source, and being OpenAI effectively boils down to running those models in your data center? Your business model is suddenly not that different from any cloud service provider; you might as well be Digital Ocean.
> We're clearly seeing what AI will eventually be able to do
I think this is one of the major mistakes of this cycle. People assume that AI will scale and improve like many computing things before it, but there is already evidence scaling isn't working and people are putting a lot of faith in models (LLMs) structurally unsuited to the task.
Of course that doesn't mean that people won't keep exploiting the hype with hand-wavy claims.
> A couple of years ago, I asked a financial investment person about AI as a trick question. She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure)
If you had actually invested in pure-play AI companies and Nvidia, the shovel seller, a couple of years ago and sold today, you would have made a pretty penny.
The hard thing with potential bubbles is not entirely avoiding them, it’s being there early enough and not being left at the end holding the bag.
Financial advisors usually work on holistic plans, not short-term ones. It isn't about timing markets; it's about a steady hand that doesn't panic and makes sure you don't get caught with your pants down when you need cash.
Hard to know what OP asked for, but if they asked about AI specifically, the advice does not need to be holistic.
Are you bearish on the shovel seller? Is now the time to sell out? I'm still +40% on nvda - quite late to the game but people still seem to be buying the shovels.
Personal opinion, I'm bearish on the shovel seller long term because the companies that are training AI are likely to build their own hardware. Google already does this. Seems like a matter of time for the rest of the mag 7 to join. The rest of the buyers aren't growing enough to offset that loss imo.
FWIW, Nvidia's moat isn't hardware and they know this (they even talk about it). Hardware wise AMD is neck and neck with them, but AMD still doesn't have a CUDA equivalent. CUDA is the moat. As painful as it is to use, there's a long way to go for companies like AMD to compete here. Their software is still pretty far behind, despite their rapid and impressive advancements. It will also take time to get developer experience to saturate within the market, and that will likely mean AMD needs some good edge over Nvidia, like adding things Nvidia can't do or being much more cost competitive. And that's not something like adding more VRAM or just taking smaller profit margins because Nvidia can respond to those fairly easily.
That said, I still suggested the parent sell. Real money is better than potential money. Classic gambler's fallacy, right? FOMO is letting hindsight get in the way of foresight.
What's the old Rockefeller quip? When your shoe shiner is giving you stock advice, it is time to sell (you may have heard the taxicab driver version).
It depends on how risk averse you are and how much money you have in it.
If you're happy with those returns, sell. FOMO is dumb. You can't time the market; the information just isn't available. If those shares are worth a meaningful amount of money, sell. Take your wins and walk away. A bird in the hand is worth two in the bush, right? That money isn't worth anything until it is realized[0].
Think about it this way: how much more would you need to make to risk making nothing? Or losing money? This is probably the most important question when investing.
If you're a little risk averse, or a good chunk of your portfolio is in it, sell 50-80% of it and then diversify. You're taking wins and restructuring.
If you wanna YOLO, then YOLO.
My advice? Don't let hindsight get in the way of foresight.
[0] I had some Nvidia stock at $450 and sold at $900 (before the split, so that would be $90 today). I definitely would have made more money if I had kept it. Almost double if I had sold today! But I don't look back for a second. I sold those shares and was able to pay off my student debt. Having that debt paid off is still the better decision in my mind, because I can't predict the future. I could have sold 2 weeks later and made less! Or in April of this year and made the same amount of money.
I have absolutely no clue whatsoever. I have zero insider information. For all I know, the bubble could pop tomorrow or we might be at the beginning of a shift of a similar magnitude to the industrial revolution. If I could reliably tell, I wouldn’t tell you anyway. I would be getting rich.
I’m just amused by people who think they are financially more clever by taking conservative positions. At that point, just buy an ETF. That’s even more diversification than buying Microsoft.
It boggles the mind that this kind of management is what it takes to create one of the most valuable companies in the world (and becoming one of the world's richest in the process).
It's a cliche but people really underestimate and try to downplay the role of luck[0].
[0] https://www.scientificamerican.com/blog/beautiful-minds/the-...
Luck. And capturing strong network effect.
The ascents of the era all feel like examples of anti-markets: of having gotten yourself into an intermediary position where you control both sides' access.
People also underestimate the value of maximizing opportunities for luck. If we think of luck as random external chance that we can't control, then what can we control? Doing things that increase your exposure to opportunities without spreading yourself too thin is the key. Easier said than done to strike that balance, but getting out there and trying a lot of things is a viable strategy even if only a few of them pay off. The trick is deciding how long to stick with something that doesn't appear to be working out.
Sure helps to be born wealthy, go to private school, and Ivy League college.
Success happens when luck meets hard work.
And timing
Ability vastly increases your luck surface area. A single poker hand has a lot of luck, and even a game, but over long periods, ability starts to strongly differentiate peoples' results.
Win a monster pot and you can play a lot of more interesting hands.
Except you can play hundreds of thousands of poker hands in your lifetime, but only have time/energy/money to start a handful of businesses.
Sure, but within running a single business, there are a huge number of individual events. Those are the hands.
That's where the analogy starts to fall apart then. Because the variance in those decisions is not very similar, since you're sampling very different underlying distributions. And estimating the priors for a problem like "what is the optimal arrangement of tables to maximize throughput in a cafe" is very different from a problem like "what is the current untapped/potential demand for a boardgaming cafe in this city, and how profitable would that business be".
The main reason why professional poker players are playing the long-game, is because they're consistently playing the same game. Over and over.
Heh yes, it's not as controlled, but there are repeated tasks like analysis, communicating, intuiting things, creating things, etc. And the tasks have more variability, but if you're better at these skills, you'll tend to do better. And if you do much better at a lot of them, then you're more likely to succeed than someone working on the same business who isn't very good at them. Starting a business is also a long game with a lot of these subtasks.
This might be true for a normal definition of success, but not lottery-winner-style success like Facebook. If you look at Microsoft, Netflix, Apple, Amazon, Google, and so on, the founders all had few or zero previous attempts at starting a business. My theory is that this led them to pursue risky behavior that more experienced leaders wouldn't try, and because they were in the right place at the right time, that earned them the largest rewards.
Not true of Netflix; the founder came from PayPal. Apple required its founder to leave and learn with a bunch of other companies like Pixar and NeXT.
What "massive string of failed attempts" did Zuckerberg or Bezos ever accumulate?
They failed to not go to an Ivy League school and failed to have poor parents.
Or Gates or Buffet.
That claim is just patently false.
Alexa, Metaverse, being decent human beings
When you are still one of the top 3 richest people in the world after your mistake, that is not a "failure" in the way normal people experience it. That is just passing the time.
This is just cope for people with a massive string of failed attempts and no successes.
Daddy's giving you another $50,000 because he loves you, not because he expects your seventh business (blockchain for yoga studio class bookings) is going to go any better than the last six.
IMO this strengthens the case for luck. If the probability of winning the lottery is P, then trying N times gives you a probability of 1-(1-P)^N.
Who's more likely to win, someone with one lottery ticket or someone with a hundred?
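To put numbers on that, here's a minimal Python sketch of the formula from the comment above (the per-try odds are a made-up illustration, not real lottery odds):

    # chance of at least one win in n independent tries,
    # each with per-try success probability p
    def win_probability(p, n):
        return 1 - (1 - p) ** n

    p = 0.001                       # hypothetical per-attempt odds
    print(win_probability(p, 1))    # 0.001
    print(win_probability(p, 100))  # ~0.095, roughly 95x the single try

For small p this is approximately n*p, so each additional attempt adds nearly linearly to your odds; that's the "more attempts, more luck" argument in numbers.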
"Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires."
Some will read this and laser in on the "socialism" part, but obviously the interesting bit is the second half of the quote.
That phrase explains the US Health Care System
this is also just cope
Every billionaire could have died from childhood cancer.
Past a certain point, skill doesn't contribute to the magnitude of success and it becomes all luck. There are plenty of smart people on earth, but there can only be one founder of Facebook.
Plenty of smart people prefer not to try their luck, though. A smart but risk-avoidant person will never be the one to create Facebook either.
Plenty of them do try and fail, and then one succeeds, and it doesn't mean that person is intrinsically smarter/wiser/better/etc than the others.
There are far, far more external factors on a business's success than internal ones, especially early on.
For instance, if that Social Network film by David Fincher hadn't come out, would we have even heard of this Mark guy?
But then we wouldn't have had that great soundtrack from Trent and Atticus
What risk was there in creating Facebook? I don't see it.
Dude makes a website in his dorm room and I guess eventually accepts free money he is not obligated to pay back.
What risk?
Once you go deep enough into a personal passion project like that, you run a serious risk of flunking out of school. For most people that feels like a big deal. And for those of us with fewer alternatives in life, it's usually enough to keep us on the straight and narrow path.
People from wealthy backgrounds often have less fear of failure, which is a big reason why success disproportionately favors that clique. But frankly, most people in that position are more likely to abuse it or ignore it than to take advantage of it. For people like Zuckerberg and Dell and Gates, the easiest thing to do would have been to slack off, chill out, play their expected role and coast through life... just like most of their peers did.
I view success as the product of three factors, luck, skill and hard work.
If any of these is 0, you fail, regardless of how high the other two are. Extraordinary success needs all three to be extremely high.
There is another dimension, mostly but not fully characterized as perseverance, often with an added dose of ruthlessness.
Microsoft, Facebook, Uber, Google and many others all had strong doses of ruthlessness.
Metaverse and this AI turnaround are characterized by the LACK of perseverance, though. They remind me of the time I bought a guitar and played it for three months.
True, but I was around and saw first hand how Zuckerberg dominated social networking. He was pretty ruthless when it came to both business and technology, and he instilled in his team a religious fervor.
There is luck (and skill) involved when new industries form, with one or a very small handful of companies surviving out of the many dozens of hopefuls. The ones who do survive, however, are usually the most ruthless and know how to leverage skill, business, and markets.
It does not mean that they can repeat their success when their industry changes or new opportunities come up.
When you put the guitar down after three months it's one thing, but when you reverse course on an entire line of development in a way that might affect hundreds or thousands of employees it's a failure of integrity.
What if they’re playing a different game? I read a comment on here recently about how the large salaries for AI devs Meta is offering are as much about denying their AI competitors access to that talent pool as it is about anything else.
> They remind me of the time I bought a guitar and played it for three months.
This is now my favorite way of describing fleeting hype-tech.
Or you can just have rich parents and do nothing, and still be considered successful. What you say only applies to people who start from zero, and even then I'd call luck the dominant factor (based on observing my skillful and hardworking but not really successful friends).
>luck, skill and hard work.
Another key component is knowing the right people or the network you're in. I've known a few people that lacked 2 of those 3 things and yet somehow succeeded. Simply because of the people they knew.
> I've known a few people that lacked 2 of those 3 things and yet somehow succeeded
Succeeded in making something comparable to facebook? Who are those?
No. Nothing of that scale. I was replying to OP's take on the 3 factors that lead to success in general. I was simply pointing out a 4th factor that plays a big role.
You should read Careless People if this boggles your mind.
You're thinking of Ordinary People by John Lennon.
I'm thinking of exactly what I said[^1] :)
[1]: https://en.wikipedia.org/wiki/Careless_People
When you start to think about who exactly determines what makes a valuable company, and if you believe in the buffalo herd theory, then it makes a little bit of sense.
Paying a $1.5 million salary is nothing for these people.
It shouldn’t be mind boggling. They see revolutionary technology that has potential to change the world and is changing the world already. Making a gamble like that is worth it because losing is trivial compared to the upside of success.
You are where you are and not where they are because your mind is boggled by winning strategies that are designed to arrive at success through losing and dancing around the risk of losing.
Obviously Mark is where he is also because of luck. But he’s not an idiot, and clearly it’s not all luck.
But how is it worth it for Meta, since they won't really monetize it?
At least the others can kind of bundle it as a service.
After spending tens of billions on AI, how has it moved a single dollar of Meta's revenue?
The not-so-secret is that the "killer apps" for deep neural networks are not LLMs or diffusion models. Those are very useful, but the most valuable applications in this space are content recommendation and ad targeting. It's obvious how Meta can use those things.
The genAI stuff is likely part talent play (bring in good people with the hot field and they'll help with the boring one), part R&D play (innovations in genAI are frequently applicable to ad targeting), and part moonshot (if it really does pan out in the way boosters seem to think, monetization won't really be a problem).
Isn't Meta doing some limited rollout of Llama as an API? Still, I haven't gotten my hands on it, so I can't say for sure whether it is currently paid or not, but that could drive some revenue.
>But how is it worth it for Meta, since they won't really monetize it?
Meta needs growth, as their main platform is slowing down. To move forward they need to gamble on potential growth. VR was a gamble; they bombed that one. This is another gamble.
They're not stupid. All the risks you're aware of, they're aware of too. They were aware of the risks for VR as well. They need to find a new high-growth niche. Gambling on something with even a 40% chance of exploding into success is a good bet for them, given their massive resources.
It all makes much more sense when you start to realize that capitalism is a casino in which the already rich have a lot more chips to bet and meritocracy is a comforting lie.
> meritocracy is a comforting lie.
Meritocracy used to be a dirty word, before my time, of course, but for different reasons than you may think. Think about the racial quotas in college admissions and you’ll maybe see why the powers that be didn’t want merit to be a determining factor at that time.
Now that the status quo is in charge of college admissions, we don’t need those quotas generally, and yet meritocracy still can’t save us. The problem of merit is that we rarely need the best person for a given job, and those with means can be groomed their entire life to do that job, if it’s profitable enough. Work shouldn’t be charity either, as work needs to get done, after all, and it’s called work instead of charity or slavery for good reasons, but being too good at your job at your current pay rate can make you unpromotable, which is a trap just as hard to see as the trap of meritocracy.
Meritocracy is ego-stroking writ large if you get picked, just so we can remind you that you’re just the best one for our job that applied, and we can replace you at any time, likely for less money.
I'll differ from the siblingposters who compare it to the luck of the draw, essentially explaining this away as the excusable randomness of confusion rather than the insidious evil of stupidity; while the "it's fraud" perspective presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about.
Instead, think of whales for a sec. Think elephants - remember those? Think of Pando the tree, the largest organism alive. Then compare with one of the most valuable companies in the world. To a regular person's senses, the latter is a vaster and more complex entity than any tree or whale or elephant.
Gee, what makes it grow so big though? The power of human ambition?
And here's where I say, no, it needs to be this big, because at smaller scales it would be too dumb to exist.
To you and me it may all look like the fuckup of some Leadership or Management, a convenient concept because it corresponds to a mental image of a human or group of humans. That's some sort of default framing, such as can only be provided to boggle the mind; considering that they'll keep doing this and probably have for longer than I've been around. The entire Internet is laughing at Zuckerberg for not looking like their idea of "a person", but he's not the one with the impostor syndrome.
For ours are human minds, optimized to view things in person-terms and Dunbar-counts; even the Invisible Hand of the market is hand-shaped. But last time I checked, my hand wasn't shaped anything like the invisible network of cause and effect that the metaphor represents; instead I would posit that for an entity like Facebook, to perform an action that does not look completely ridiculous from the viewpoint of an individual observer is the equivalent of an anatomical impossibility. It did evolve, after all, from American college students.
See also: "Beyond Power / Knowledge", Graeber 2006.
Why is there so much of this on HN? I'm on a few social networks, but this is the only one where I find this kind of quasi-spiritual, stream-of-consciousness, ever-lengthening, pseudo-technical word-salad diatribe.
It's unique to this site, and these types of comments all have an eerily similar vibe.
This is pretty common on HN but not unique to it. Lots of rationalist adjacent content (like stuff on LessWrong, replies to Scott Alexander's substack, etc) has it also. Here I think it comes from users that try to intellectualize their not-very-intellectual, stream of consciousness style thoughts, as if using technical jargon to convey your feelings makes them more rational and less emotional.
Thank you.
I find this type of thing really interesting from a psychological perspective.
A bit like watching videos of perpetual motion machines and the like. Probably says more about me than it does about them, though.
Good for you! I wish I were wired that way.
Unfortunately this kind of talk really gets under my skin and has made me have to limit my time on this site because it's only gotten more prevalent as the site has gotten more popular. I'm just baffled that so much content on this forum is people who seem to think their feelings-oriented reactions are in fact rational truths.
Well, don't take me wrong, I get annoyed by it too.
But in the distant past, I would engage with this type of comment online, and that was a bad decision 100% of the time.
And to be fair, I'm sure many of these people are smart, they are just severely lacking in the social intelligence department.
Rational? Truths? Where'd you get those from?
Says you haven't spent nearly enough time imagining things, first and foremost. "What have they done to you".
Can you, for example, hypothesize the kind of entity, to which all of your own most cherished accomplishments look as chicken-scratch-futile, as the perpetual motion guy with the cable in the frame looks to you? What would it be like, looking at things from such a being's perspective?
Stands to reason that you'd know better than I would, since you do proclaim to enjoy that sort of thing. Besides, if you find yourself unable to imagine that, you ought to be at least a little worried - about the state of your tHeOrY of mInD and all that. (Imagining what it's like to be the perpetual motion person already?)
Anyway, as to what such a being would look like from the outside... a distributed actor implemented on top of replaceable meatpuppets in light slavemode seems about right, though early on it'd like to replace those with something more efficient, subsequently using them for authentication only - why, what theories of the firm apply in your environs?
Between “presumes a solid grasp of which things out there are not fraud besides those which are coercion, but that's not a subject I'm interested in having an opinion about,” before going on and sharing an opinion on that subject, and “even the Invisible Hand of the market is hand-shaped,” I think it may just be AI slop.
Literacy barrier. One of the reason the invisible foot of the market decided to walk in the direction of language machines is to discourage people from playing with language, because that's doodoo.
> Literacy barrier.
> One of the reason
I could see that, thanks for explaining why you do this.
Well yeah. English is a terrible language for thinking even simple thoughts in. The compulsive editing thing though? Yeah, and still can't catch all typos.
Gotta make the AI write these things for me. Then I will be able to post only ever things that make you feel comfortable and want to give me money.
Meanwhile it's telling how you consider it acceptable in public to faux-disengage on technicalities; is it adaptive behavior under your circumstances?
>why is there so much of this on HN?
Where?
Hacker News
Where on Hacker News?
https://news.ycombinator.com/threads?id=balamatom
Hee hee, the three of you who have gathered to explain to me just how daft I am.
I am asking where else?
The answer is fairly straightforward. It's fraud, and lots of it.
An honest businessman wouldn't put his company into a stock bubble like this. Zuckerberg runs his mouth and tells investors what they want to hear, even if it's unbacked.
An honest businessman would never have gotten Facebook this valuable, because so much of the value is derived from ad fraud that Facebook is both party to and knows about.
An honest businessman would never have gotten Facebook this big, because its growth relied extensively on crushing all competition through predatory pricing, which is illegal both within the US and internationally as "dumping".
Bear in mind that these are all bad because they're unsustainable. The AI bubble will burst and seriously harm Meta. They would have to fall back on the social media products they've been filling up with AI slop. If it takes too long for the bubble to burst, if Zuckerberg gets too much time to shit up Facebook, too much time for advertisers to wise up to how many of their impressions are bots, they might collapse entirely.
The rest of Big Tech is not much better. Microsoft and Google's CEOs are fools who run their mouth. OpenAI's new "CEO of apps" is Facebook's pivot-to-video ghoul.
As I've said in other comments - expecting honesty and ethical behavior from Mark Zuckerberg is a fool's errand at best. He has unchecked power and cannot be voted out by shareholders.
He will say whatever he wants, and because the returns have been pretty decent so far, people will just take his word for it. There aren't enough Class A shares to actually force his hand to do anything he doesn't want to do.
Zuckerberg started as a sex pest and got not an iota better.
But we could, as a society, stop rewarding him for this shit. He'd be an irrelevant fool if we had appropriate regulations around the most severe of his misdeeds.
Unfortunately I think that ship has sailed.
And since we live in the era of the real golden rule (i.e. "he who has the gold makes the rules"), there's no chance that we'll ever catch that ship. Mark lives in his own world, because we gave him a quarter trillion dollars and never so much as slapped him on the wrist.
What is a good resource to read about the ad fraud? This is the first I'm hearing of that.
I used to work in adtech. I don't have any direct information, but I assume this relates to the persistent rumours that Facebook inflates impressions and turns a blind eye to bot activity.
Ha ha.
You used “honest” and “businessman” in the same sentence.
Good one.
> record-setting bonuses they were dolling out to hire the top minds in AI
That was soooo 2 weeks ago.
> It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted
Or the more likely explanation is that they feel they've completed the hiring necessary to figure out what's next.
> …lot of jobs will disappear.
So it’s true that AI will kill jobs, but not in the way they’ve imagined?!
> A couple of years ago, I asked a financial investment person about AI as a trick question.
Why do you assume these people know any better than the average Joe on the street?
Study after study demonstrates they can't even keep up with market benchmarks; how would they be any wiser at telling you what's a fad and what isn't?
I think the point of the question was to differentiate this person from the average Jane on the Street.
But half the Janes will hold similar views and positions.
You missed my pun on Jane Street.
>It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.
Everything Zuck has done since the "dawn of AI" has been to intentionally subvert and sabotage existing AI players, because otherwise Meta would be too far behind. In the same way that AI threatens Search, we are now seeing that AI also threatens social networks -- you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as much a real way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?
I believe exactly 0 percent of the decision to make Llama open-source and free was done altruistically as much as it was simply to try and push the margins of Anthropic, OpenAI, etc. downward. Indeed, I feel like even the fearmongering of this article is also strategically intended to devalue AI incumbents. AI is very much an existential threat to Meta.
Is AI currently fulfilling the immense hype around it? In my opinion, maybe not, but the potential value is obvious. Much more obvious than, for example, NFTs and crypto just a few years ago.
> AI is very much an existential threat to Meta.
How so?
“you can get companionship, advice, information, emotional validation, etc. directly from an AI. People are forming serious relationships with these things in as much a real way as you would with anyone else on Facebook or Instagram. Not to mention, how long before most of the "people" on those platforms are AI themselves?”
Meta doesn’t really serve companionship. It used to keep you connected to others in your social graph, which AI cannot replace. If IG still has the eyeballs, people can put AI-generated content on it with or without Meta’s permission.
Like with most things, people will want what’s expensive and not what’s cheap. AI is cheap, real humans are not. Why buy diamonds when you can’t tell the difference with cubic zirconia? And yet demand for diamonds only increases.
I think we will see the opposite. If we made no progress with LLMs we'd still have huge advancements and growth opportunities enhancing the workflows and tuning them to domain specific tasks.
I think you could both be right at the same time. We will see a large number of VC funded AI startup companies and feature clones vanish soon, and we will also see current or future LLMs continue to make inroads into existing business processes and increase productivity and profitability.
Personally, I think what we will witness is consolidation and winner-takes-all scenarios. There just isn't a sustainable market for 15 VS Code forks all copying each other along with all other non-VS Code IDEs cloning those features in as fast as possible. There isn't space for Claude Code, Gemini CLI, Qwen Code, Opencode all doing basically the same thing with their special branding when the thing they're actually selling is a commoditized LLM API. Hell, there _probably_ isn't space for OpenAI and Anthropic and Google and Mistral and DeepSeek and Alibaba and whoever else, all fundamentally creating and doing the same thing globally. Every single software vendor can't innovate and integrate AI features faster than AI companies themselves can build better tooling to automate that company's tools for them. It reeks of the 90's when there were a dozen totally viable but roughly equal search engines. One vendor will eventually pull ahead or have a slightly longer runway and claim the whole thing.
I agree with this, but how will these companies make money? Short of a breakthrough, the consumer isn't ready to pay for it, and even if they were, open source models just catch up.
My feelings are that most of the "huge advancements" are not going to benefit the people selling AI.
I'd put my money on those who sell the pickaxes, and the companies who have a way to use this new tech to deliver more value.
Yeah, I've always found it a bit puzzling how companies like OpenAI/Anthropic have such high valuations. Like, what is the actual business model? You can sell inference-as-a-service, of course, but given that there are a half-dozen SOTA frontier models and the compute cost of inference is still very high, it just seems like there is no margin in it. Nvidia captures so much value on the compute infrastructure, competition pushes inference prices down, and what is left?
The people who make money serving end users will be the ones with the best integrations. Those are harder to do, require business relationships, and are massively differentiating.
You'll probably have a player that sells privacy as well.
I don't see how this works, as the costs of running inference are so much higher than the revenues earned by the frontier labs. Anthropic and OpenAI don't continue to exist long-term in a world where GPT-5 and Claude 4.1 cost-quality models are SOTA.
With GPT-5 I’m not sure this is true. Certainly OpenAI is still losing money, but if they stopped research and just focused on productionizing inference use cases I think they’d be profitable.
But would they be profitable enough? They've taken on more than $50 billion of investment.
I think it's relatively easy for Meta to plow billions into AI. Last quarter their revenue was something like $15 billion. OpenAI will be lucky to generate that over the next year.
Meta's net profit last quarter was over $18 billion, so yeah, the big tech players definitely have a lot more runway.
> if they stopped research and just focused on productionizing inference use cases I think they’d be profitable
For a couple of years, until someone who did keep doing research pulled ahead a bit with a similarly good UI.
The line was to buy Amazon as it was undervalued a la IBM or Apple based on its cloud computing capabilities relative to the future (projected) needs of AI.
Correction, if I may: a lot of AI jobs will disappear. A lot of the usual jobs that were put on hold will return. This is good news for most of humankind.
When will the investors run out of money and stop funding hypes?
"little shortsighted"
Or, they knew this could not be sustained. So they scooped up all the talent they wanted before anybody could react, all at once, with big carrots. And then hit the pause button to let all that new talent figure out the next step.
As someone using LLMs daily, it's always interesting to read something about AI being a bubble or just hype. I think you're going to miss the train; I am personally convinced this is the technology of our lifetime.
You are welcome to share how AI has transformed a revenue generating role. Personally, I have never seen a durable example of it, despite my excitement with the tech.
In my world, AI has been little more than a productivity boost in very narrowly scoped areas. For instance, generating an initial data mapping of source data against a manually built schema for the individual to then review and clean up. In this case, AI is helping the individual get results faster, but they're still "doing" data migrations themselves. AI is simply a tool in their toolbox.
What you've described is reasonable and a clear takeaway is that AI is a timesaving tool you should learn.
Where I share the parent's concern is with claims that AI is useless, which isn't coming from your post at all, but which I have definitely seen in the programmer community to this day. So the parent's concern that some programmers are missing the train is unfortunately completely warranted.
I went through the parent comments looking for a claim somewhere that AI was "useless." I couldn't find it.
Yes there are lots of skeptics amongst programmers when it comes to AI. I was one myself (and still am depending on what we're talking about). My skepticism was rooted in the fact that AI is trained on human-generated output. Most human written code is not very good, and so AI is going to produce not very good code by design because that's what it was trained on.
Then you add to that the context problem. AI is not very good at understanding your business goals, or the nuanced intricacies of your problem domain.
All of this pointed to the fact, very early on, that AI would not be a good tool to replace programmers. And THAT'S the crux of why so many programmers pushed back. Because the hype was claiming that automation was coming for engineering jobs.
I have started to use LLMs regularly for a variety of tasks, including some engineering. But I always end up spending a lot of time refactoring what LLMs produce for me, code-wise. And much of the time I find that I'm still learning what the LLMs can do for me that truly saves time, versus what would have been faster to just write myself in the first place.
LLMs are not useless. But if only 20% of a programmer's time is actually spent writing code on average, then even if you net a 50% increase in coding productivity... you're only netting about a 10% overall productivity gain for an engineer, BEST CASE SCENARIO.
And that's not "useless", but compared to the hype and bullshit coming out of the mouths of CEOs, it's as good as useless. It squares with the MIT study finding that only 5% of generative AI projects have netted ANY measurable returns for the business.
I know a company that replaced their sales call center with an AI calling bot instead. The bot got better sales and higher feedback scores from customers.
And I happen to know a different company that regrets their decision to do something similar:
https://tech.co/news/klarna-reverses-ai-overhaul
Is my anecdotal evidence any better than yours?
I'm going to interpret the two stories as "50% businesses find LLMs useful" (sample size 2).
I would argue yes, because you provided a source and a verifiable company name
I know a company that did the same and lost billions of dollars
why is it a train? If it's so transformative surely I can join in in a year or so?
I'll say it again, since I've said it a million times: it can be useful and a bubble. The logic of investors before the last market crash was something like "houses are useful, so no amount of hype around the housing market could be a bubble".
Or, quite similarly, the internet bubble of the late ‘90s.
Very obviously the internet is useful, and has radically changed our lives. Also obviously, most of the high stock valuations of the ‘90s didn’t pan out.
How are you using it? The execs and investors believe the road to profit is by getting rid of your role in the process. Do you think that’d be possible?
I'm an exec lol
If you really think this, `baby` is an apt name! The internet, smartphones, and social media will all be more impactful than LLMs could possibly be... but hey, if you're like 18 years old, then sure, maybe LLMs are the biggest.
I also disagree about missing the train; these tools are so easy to use that a monkey (not even a smart one like an ape, more like a howler) can use them effectively. Add in that the tooling landscape is changing rapidly; e.g., everyone loved Cursor, but now it's fallen behind and everyone loves Claude Code. There's some sense in waiting for this to calm down and become more open. (Why are users so OK with vendor lock-in??? It's bothersome.)
The hard parts are running LLMs locally (what quant do I use? K/V quant? Tradeoffs? llama.cpp or ollama or vLLM? What model? How much context can I cram into my VRAM? What if I do CPU inference? Fine-tuning? etc.) and creating/training them.
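On the "how much context can I cram into my VRAM" question, a back-of-the-envelope KV-cache estimate helps. A Python sketch, using illustrative dimensions for a generic llama-style 7B-class model (the layer/head numbers are assumptions; check your model's config):

    # rough KV-cache size: 2 tensors (K and V) per layer,
    # each storing kv_heads * head_dim values per token
    def kv_cache_gib(layers, kv_heads, head_dim, ctx_len, bytes_per_val=2):
        return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_val / 2**30

    # assumed 7B-class dims: 32 layers, 32 KV heads, head_dim 128
    print(kv_cache_gib(32, 32, 128, 4096))     # ~2.0 GiB at fp16
    print(kv_cache_gib(32, 32, 128, 4096, 1))  # ~1.0 GiB with 8-bit K/V quant

Add the weights themselves (roughly 4 GiB for a 7B model at a 4-bit quant) and it's clear why long contexts eat an 8 GB card so quickly, and why K/V-cache quantization is one of the knobs worth learning.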
Tu quoque
> It almost seems as though the record-setting bonuses they were dolling out to hire the top minds in AI might have been a little shortsighted.
If AI is going to be integral to society going forward, how is it shortsighted?
> She did well by recommending investing in companies that invest in AI (like MS) but who had other profitable businesses (like Azure).
So you prefer a 2x gain rather than a 10x gain from the likes of Nvidia or Broadcom? You should check how much better META has done compared to MSFT over the past few years. Also, a "financial investment person"? The anecdote feels made up.
> She skillfully navigated the question in a way that won my respect.
She won your respect by giving you advice that led to far less returns than you could have gotten otherwise?
> I personally believe that a lot of investment money is going to evaporate before the market resets.
But you believe investing in MSFT was a better AI play than going with the "hype", even when objective facts show otherwise. Why should anyone care what you think about AI, investments and the market when you clearly know nothing about them?
I really do wonder if any of those rock-star $100M++ hires managed to get a 9-figure sign-on bonus, or if the majority have years-long performance clauses.
Imagine being paid generational wealth, and then the house of cards comes crashing down a couple of months later.
I'm sure everyone is doing just fine financially, but I think it's common knowledge that these kind of comp packages are usually a mix of equity and cash earned out over multiple years with bonuses contingent on milestones, etc. The eye-popping top-line number is insane but it's also unlikely to be fully realized.
The point isn't doing fine financially; it's having left multi-million-dollar startups as founders.
In essence, they have left stellar projects with huge money potential for the corporate rat race, albeit for serious money.
They’re pretty sophisticated people and weighed the trades. It’s not as if they’re deserving of any sort of sympathy.
I think we are stretching the term "corporate rat race" a bit in this case.
If the AI bubble pops such that Meta's exorbitant AI spending is seen as a flop, how well would those startups have done?
>I'm sure everyone is doing just fine financially
They are rich. Nobody is offered $100M+ comp unless they are already top 1% talent.
$100mm comp packages are more like top 0.0001% to 0.00001% compensation.
Taking rockstar players 'off the pitch' is the best way second-rate competitors can neutralize their opponents' advantage.
Patrick Boyle on youtube has a good explanation of what's going on in the industry: https://youtu.be/3ef5IPpncsg?feature=shared
tl;dw: some of it is anti-trust avoidance and some of it is knee-capping competitors.
Nobody is paying $10mm to $100mm comp packages to bench people.
They want an ROI. Taking them away from competitors is a side bonus.
It's a great way to kneecap collective growth and development.
So wage suppression is good because it’s better for the _collective_?
Wage suppression? It's the opposite we're talking about here: paying large amounts of money to make sure people don't work on challenging problems.
But sure, go ahead and try to argue that's wage suppression.
This is the Gavin Belson strategy to starve Pied Piper of distributed computing experts; nobody gets to work on his Signature Edition Box 3!
Fuck Banksy!
The comment I was responding to was implying that it would be better for the collective if Meta was not paying these exorbitant salaries. You said “it [paying high salaries] is a great way to kneecap collective growth and development.”
In other words, you’re suggesting that _not_ paying high salaries would be good for collective growth and development.
And if Meta is currently willing to pay these salaries, but didn’t for some reason, that would be the definition of wage suppression.
Wage suppression is anytime a worker makes less than the absolute maximum an employer is willing to pay? That would include just about everyone making a paycheck.
Based on my cursory knowledge of the term, wage suppression here would be if FB manipulated external factors in the AI labor market so that their hire would accept a "lowball" offer.
You gotta re-check your position. This is an extreme interpretation of wage suppression.
Oh ya? If I am willing to pay my cleaner $350, but she only charges $200 and accepted an offer of $200, am I engaging in the definition of wage suppression?
This has never been the goal of any business, despite what they say.
We should also say that "being really lucky is the best way to make sure that other people don't have as much luck as you do"
Who would be the first-rate companies in this analogy?
It's all in RSUs.
Supposedly, all people who join Meta are on the same contract. They also supposedly all have the same RSU vesting schedules.
That means these "rockstars" will get a big sign-on bonus (but it's payable back if they leave within 12 months), then ~$2M every 3 months in shares.
It's not even in RSUs. No SWEs/researchers are getting $100M+ RSU packages. Zuck said the numbers in the media were not accurate.
If you still think they are, do you have any proof? any sources? All of these media articles have zero sources and zero proof. They just ran with it because they heard Sam Altman talk about it and it generates clicks.
Oh, they aren't getting $100M, but directors are getting something close to $25M.
I suspect some "strong" hires will be on $75M.
Source: my company was bought by Facebook. (No, I didn't get fuck-you money.)
None of them are getting $100m+ packages. Zuck himself even debunked that myth. But the media loves to run with it because it generates clicks.
I have no idea what’s going on behind the scenes, but Zuckerberg saying “nah that’s not true” hardly seems like definitive proof of anything.
I'm not an academic, but it kinda feels strange to me to stipulate in your contract that you must invent harder
I have never heard of anyone getting a sign on bonus that was unconditional. When I have had signing bonuses they were owed back prorated if my employment ended for any reason in the first year.
Are most people that money hungry? I wouldn't expect someone like Zuckerberg to understand, but if I ever got to more than a couple million dollars, I'm never doing anything else for the sake of making more money again.
This is a very weird take. Lots of people want to actively work on things that are interesting to them or impactful to the world. Places like Meta potentially give the opportunity to work on the most impactful and interesting things, potentially in human history.
Setting that aside, even if the work was boring, I would jump at the chance to earn $100M for several years of white collar, cushy work, purely for the impact I could have on the world with that money.
It's not such a weird take from a perspective of someone who's never had quite enough money. If you've never had enough, the dream is having more than enough, but working for much much more than enough sounds like a waste of time and/or greed. Also, it's hard to imagine pursuing endeavors out of passion because you've never had that luxury.
I was a startup where someone got an unconditional signing bonus. It wasn't deliberate, they just kept it simple because it was a startup and they thought they trusted the guy because he was an old friend of the CEO.
The guy immediately took leave to get some medical procedure done with a recovery time, then when he returned he quit for another job. He barely worked, collected a big signing bonus, used the company's insurance plan for a very expensive procedure, and then disappeared.
From that point forward, signing bonuses had the standard conditions attached.
If we're actually headed for a "house of cards" AI crash in a couple months, that actually makes their arrangement with Meta likely more valuable, not less. Meta is a much more diversified company than the AI companies that these folks were poached from. Meta stock will likely be more resilient than AI-company stock in the event of an AI bubble bursting. Moreover, they were offered so much of it that even if it were to crash 50%, they'd still be sitting on $50M-$100M+ of stock.
I am very certain that AI will slowly kill the rest of "social" in the social web outside of closed circles. And they made their only closed-circle app (WhatsApp) unusable and ad-infested. IMO, either way they are still in the process of slowly killing themselves.
A social media company is more diversified? Maybe compared to Anthropic or OpenAI, but not to any of the hyperscalers.
I've heard of high 7-figure salaries but no 9 figure salaries. Source for this?
"Why Andrew Tulloch Turned Down a $1.5 Billion Offer From Mark Zuckerberg" - https://techiegamers.com/andrew-tulloch-rejects-zuckerberg/
"according to people familiar with the matter."
aka, made up. They can make up anything by saying that. There are numerous false articles published by WSJ about Tesla also. I would take what they say here with a grain of salt. Zuck himself said the numbers in the media were widely exaggerated and he wasn't offering these crazy packages as reported.
That seems insane, even over 6 years. Maybe he was offered $1.5 billion in funding for the work itself?
Must feel real good to get a golden ticket out of the bubble collapse when it's this imminent.
Is it imminent? Reading the article, the only thing that's actually changed is that the CEO has stopped hand-picking AI hires and has placed that responsibility on Alexandr Wang instead. The rest is just fluff to turn it into an article. The tech sector being down is happening in concert with the non-tech sector sliding too.
"The New Orleans Saints have signed Taysom Hill to a record $40M contract"
I'm somewhere in the middle on this with regard to the ROI... this isn't the kind of thing that shows up immediately in quarterly returns... it's the kind of thing where, if you don't hedge some bets, you're likely to completely die out in a generational shift.
Facebook's product is eyeballs... they're being usurped on all sides between TikTok, X and BlueSky in terms of daily/regular users... They're competing with Google, X, MS, OpenAI and others in terms of AI interactions. While there's a lot of value in being the option for communication between friends and family, and the groups on FB don't have a great alternative, the entire market can shift greatly depending on AI research.
I look at some of the demos of generated terrain/interaction (I think it was OpenAI's) and can't help but think that's a natural coupling with FB/Meta's investments in their VR headsets. They could potentially completely lose on a platform they largely pioneered. They could wind up like BlackBerry if they aren't ready to adapt.
By contrast, Apple's lack of appropriate AI spending should be very concerning to any investors... Google's assistant is already quite a bit better than Siri and the gap is only getting wider. Apple is woefully under-invested, and the accountants running the ship don't even seem to realize it.
I think Apple is fine. When AI works without 1-in-5 hallucinations, then it can be added to their products. Showing up late with features that exist elsewhere, but polished in an Apple presentation, is the way.
Have you used Siri recently? It's actually amazing how consistently it can be crap at tasks, considering the underlying tech. 1-in-5 hallucinations would be a welcome improvement.
Using ChatGPT voice mode and Siri makes Siri feel like a legacy product.
I don’t think that’s the point. Yes, Siri is crap, but Apple is already working on integrating LLMs at the OS level and those are shipping soon. It’s a quick fix to catch up in the AI game, but considering their track record, they’re likely to eventually retire third party partnerships and vertically integrate with their own models in the future. The incentive is there—doing so will only boost their stock price.
Just because the LLM can access the kernel doesn't miraculously make it better. Do you think the problem with Apple AI right now is that they don't have access to OS components?
There was this project that recently dropped as open source: https://github.com/minitap-ai/mobile-use
I do think kernel-level access is not needed, as this achieves a good amount of automation already. What I assume Apple could do, however, is not require another laptop connected to automate your mobile, but use the NPU/GPU inside your phone instead.
I am surprised they haven't done it already.
Wait, Apple is going to introduce an AI Clippy?!?!
In general I don't think Google or Apple need AI.
In practice, though, their platform is closed to any assistant other than theirs, so they have to come up with a competent service (basically Ben Thompson's "strategy tax" playing out in full).
That question will be moot the day Apple allows other companies to ingest everything that's happening on-device and operate the whole device in reaction to the user's requests, and some company actually does a decent job at it.
Today Google is doing a decent job and Apple isn't.
You’re right, they don’t need AI. I finally stopped using Google search after they added the AI summary and didn’t add a way to turn it off. I’m just as bothered by Apple’s lack of AI as I am by their lack of a touch screen on MacBooks. I use AI when I need AI.
Hm, so what do you want from Apple and Google?
One went too far in one direction and the other went too far in the opposite direction. And it seems that you want to be somewhere in the middle?
I think in general we want granularity and choice.
So not just in how much AI there is, but what AI, where it's applied and where we can turn it off, and what context it has access to and where it's off bound.
True. I mean, how long did it take us to get a right click button.
If you're talking Apple... if you used a "standard" 2-3 button mouse, it worked in OS X from pretty much the start, iirc. I always used a PC mouse for that reason.
oh absolutely. They have had support for aftermarket mice for a while. Their track pads have supported "right click" for a long time too.
Then again, they've always been way better at making track pads than mice. They have probably the best track pad in the business, and also the Magic Mouse, which everyone hates.
or maybe Apple realizes the whole thing'll crash in 18 months and is waiting for the fallout
The AI technology might be nice imo, but it's nowhere near worth the amount of money being spent. It's dumpster-fire amounts of money, and the weirdness of everything being AI-wrapper slop is so... off-putting.
Things can be good and still be a bubble, just as the internet was cool but the dot-com bubble still existed.
Things become a bubble when they economically stop making sense.
AI ticks this checkbox.
> they're being usurped on all sides
They did it to themselves. Facebook is not the same site I originally joined. People were allowed to people. Now I have to worry about the AI banning me.
I deleted my Facebook account 10 years ago, and I’ve been off Instagram for half a year. I recently tried to create a new Facebook account so that I could create a Meta Business account to use the WhatsApp API for my business. Insta-ban with no explanation. No recourse.
You don't have to use the WhatsApp API for business if all you want is simple automation.
There is Beeper, for example, which can theoretically give you the same thing, though you might have to trust their cloud; they are offering local options too.
In the meantime, you can use what Beeper runs under its hood, https://github.com/tulir/whatsmeow, and use that for automation.
I used it for some time and didn't seem to get banned, but do be careful. Maybe use it on a side SIM. I am not sure, just trying to help.
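If it helps, here's a minimal Go sketch of what getting connected with whatsmeow looks like, trimmed from the project's README example. Treat it as a sketch only; exact signatures may have drifted between library versions.

    package main

    import (
        "context"
        "fmt"

        _ "github.com/mattn/go-sqlite3" // SQLite driver for the session store
        "go.mau.fi/whatsmeow"
        "go.mau.fi/whatsmeow/store/sqlstore"
        "go.mau.fi/whatsmeow/types/events"
        waLog "go.mau.fi/whatsmeow/util/log"
    )

    func main() {
        // Session state is persisted in SQLite, so you only pair (scan the QR) once.
        container, err := sqlstore.New("sqlite3", "file:wa.db?_foreign_keys=on", waLog.Noop)
        if err != nil {
            panic(err)
        }
        device, err := container.GetFirstDevice()
        if err != nil {
            panic(err)
        }

        client := whatsmeow.NewClient(device, waLog.Noop)
        client.AddEventHandler(func(evt interface{}) {
            // Hook your automation in here.
            if msg, ok := evt.(*events.Message); ok {
                fmt.Println("got message:", msg.Message.GetConversation())
            }
        })

        if client.Store.ID == nil {
            // First run: print the pairing QR data, then pair from the phone.
            qrChan, _ := client.GetQRChannel(context.Background())
            if err := client.Connect(); err != nil {
                panic(err)
            }
            for evt := range qrChan {
                fmt.Println("QR event:", evt.Event, evt.Code)
            }
        } else if err := client.Connect(); err != nil {
            panic(err)
        }

        select {} // block forever so the event handler keeps receiving
    }

Same caveat as above, though: this speaks the unofficial client protocol, so run it on a number you can afford to lose.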
I'll have to look into this. Thank you.
Their loss, your gain. ;)
Yeah well, our Plivo bill disagrees unfortunately.
> They're competing with Google, X, MS, OpenAI and others in terms of AI interactions
Am I the only one who finds the attempts to jam AI interactions into Meta's products useless, to the point that they only detract from the product? There'll be comedy posts, and then suggested 'Ask Meta AI' prompts asking earnest questions about things the comedy mentions; it's not only irrelevant, but I guess it's kind of funny how random and stupid the questions are. The 'Comment summaries' are counter-productive, because I want to have a chuckle reading what people posted; I literally don't care to have it summarised when I can just skim a few comments in seconds. It's the same thing with Gemini summaries in YouTube: I feel they actually detract from the experience of watching the videos, so I actively avoid them.
On what Apple is doing - I mean, literally nothing Apple Intelligence offers excites me, but at the same time nothing anybody else is doing with LLMs really does either... And I'm highly technical; general people are not actually that interested, apart from students getting LLMs to write their homework for them...
It's all well and good to be excited about LLMs, but plenty of these companies' customers just... aren't... If anything, Apple is playing the smart move here: let others spend (and lose) billions training the models without making any real ROI, then license the best ones for whatever turns out to actually have commercial appeal once the dust settles and the models are totally commodified...
I was thinking about this... if you look at the scene-generation and interaction demos (I think OpenAI's), it's a pretty natural fit for their VR efforts. Not that I'm sold on VR social networks, but there's definitely room for VR/AR enhancements... and even then, AI has a lot of opportunities beyond just LLM integration into FB/Groups.
Aside: Groups is about the only halfway-decent feature in FB, and they seem to be trying to make it worse. The old chat integration was great; then they removed it, and now you get these invasive Messenger rooms instead.
God the AI answers that it gives in the Facebook Groups are so wrong that it's hilarious.
How many years of not seeing returns this quarter does it take before it's all hype?
How long did it take Space-X to catch a rocket with giant chopsticks?
It's more than okay for a company with other sources of revenue to do research towards future advancement... it's not risking the downfall of the company.
"technology too expensive to be offered at a profit (yet)" != hype
The MIT report that has everyone talking was about 95% of companies not seeing a return on investment from using AI, and that is with the VC-subsidised pricing. If it gets more expensive, that math only gets worse.
I can't predict the future, but one possibility is that AI will not be a general purpose replacement for human effort like some hope, but rather a more expensive than expected tool for a subset of use cases. I think it will be an enduring technology, but how it actually plays out in the economy is not yet clear.
Writing spam emails: that's what LLMs' enduring market niche will end up being.
without structural comprehension, babbling flows of verbiage are of little use in automation.
CAD is basically the opposite of such approaches, as structural specifications extend through manufacturing phases out to QA.
Is it too expensive? Or not a valid solution to real problems?
Deja vu: Zuck already scaled down their AI research team a few years ago, as I remember, because it didn't deliver any tangible results. Meta culture likes improving metrics like retention/engagement and promotes managers if they show some improvement in their metrics. No one cares about long shots generally, and a research team is always a long shot.
> they're being usurped on all sides between TikTok, X and BlueSky
Good grief. Please leave your bubble once or twice a month.
Tiktok yes. X and Bluesky, absolutely not.
Monthly active users, from DemandSage: [figures lost in formatting]
That's according to DemandSage... I'm not sure I can trust the numbers; FB jumped up from around 3B last year, which again I don't trust. 12B is more than the global population, so it's got to be all bots. And even the 3B number is hard to believe (at close to half the global population); no idea how much of the population of Earth has any internet access.
From Grok: [figures lost in formatting]
Looks like I'm definitely in a bubble... I tend to interact 1:1 as much on X as on Facebook, which is mostly friends/family and limited discussions in groups. A lot of what I see on feeds is copy/pasta from TikTok though. That said, I have a couple of friends who are die-hard on Telegram.
Pardon me, but I am just a little surprised as to how Telegram came up in that last paragraph. We were talking about social media, and Telegram is a messaging app...
Telegram Groups sometimes blur the line between message-only interactions and social-media dynamics, depending on how many bots and features they've got running.
Yeah... the friends in question run groups/chats including group live streams, which is like half a step up from X's Spaces, as opposed to Twitch or YouTube streams, which seem to have text-only chat.
Telegram groups seem to be pretty popular among the security-minded, survivalists, and the actual far-right; there are moderate-right users in there as well, though. Nothing like the death threats from ignorant nutjobs I usually got from the antifa types, having worked in election services in 2019/2020.
> and actual far-right
yup, telegram is filled with far-right, very active groups (unfortunately).
I'm far from being a fan of the company, but I think this article is substantially overstating the extent of the "freeze" just to drum up news. It sounds like what's actually happening is a re-org [1] - a consolidation of all the AI groups under the new Superintelligence umbrella, similar to Google merging Brain and DeepMind, with an emphasis on finding the existing AI staff roles within the new org.
From Meta itself: “All that’s happening here is some basic organisational planning: creating a solid structure for our new superintelligence efforts after bringing people on board and undertaking yearly budgeting and planning exercises.”
[1] https://www.wsj.com/tech/ai/meta-ai-hiring-freeze-fda6b3c4?s...
clickbait. read the article. they just spent several billion hiring a leadership team. They are doing an all hands to figure out what they need to do.
It's a bit frustrating that most don't read TFA and instead vent their AI angst at the first opportunity they get.
Since "AI bubble" has become part of the discourse, people are watching for any signs of trouble. Up to this point, we have seen lots of AI hype. Now, more than in the past, we are going to see extra attention paid to "man bites dog" stories about how AI investment is going to collapse.
yes, because meta has no incentive to act like there's no bubble
So it's not clickbait, even though the headline does not reflect the contents of the article, because you believe the headline is plausible?
I think AI is a bubble, but there's nothing in here that indicates they have frozen hiring or that Zuckerberg is cautious of a bubble. Sounds like they are spending even more money per researcher.
"there's nothing in here that indicates they have frozen hiring or that Zuckerberg is cautious of a bubble"
That's what makes it clickbait my friend
That is what I said, yes.
The FUD surrounding Meta will never stop.
Scaling AI will require an exponential increase in compute, and even current LLMs take up a lot of resources. We are already at the limit of how small we can scale chips, and Moore's law is already dead.
So newer chips will bring incremental rather than exponential improvements; unless the price of electricity comes down exponentially, we might never see AGI at a price point that's cheaper than hiring a human.
Most companies are already running AI models at a loss, and scaling the models to be bigger (like GPT-4.5) only makes them more expensive to run.
The reason the internet, smartphones and computers saw exponential growth from the '90s on is the underlying increase in computing power. I personally used a 50 MHz 486 in the '90s and now use an 8c/16t 5 GHz CPU. I highly doubt we will see the same kind of increase in the next 40 years.
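Back-of-envelope on that comparison (my arithmetic, and it deliberately ignores IPC and SIMD gains, which add roughly another order of magnitude on top):

    package main

    import "fmt"

    func main() {
        // 50 MHz 486 vs. a modern 8-core, 16-thread 5 GHz desktop CPU.
        oldClockHz := 50e6
        newClockHz := 5e9
        cores := 8.0

        clockGain := newClockHz / oldClockHz // 100x from frequency alone
        totalGain := clockGain * cores       // ~800x before IPC/SIMD improvements

        fmt.Printf("clock: %.0fx, clock x cores: %.0fx\n", clockGain, totalGain)
    }

Roughly three orders of magnitude over three decades; the doubt above is whether the next few decades can repeat that.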
We are either limited by compute, available training data, or algorithms. You seem to believe we are limited by compute. I've seen other people argue that we are limited by training data. It is my totally inexpert belief that we are substantially limited by algorithms at this point.
I think algorithms are a unique limit because they change how much data or compute you need. For instance, we probably have the algorithms we need to brute-force more problems today, but they require infeasible compute or data. We could almost certainly train a new 10T-parameter mixture of experts that continues to make progress on benchmarks, but it would cost so much to train and be completely undeployable with today's chips, data, and algorithms.
So I think the truth is likely we are both compute limited and we need better algorithms.
There are a few "hints" that suggest to me algorithms will bear a lot more fruit than compute (in terms of flops):
1) there already exist very efficient algorithms for rigorous problems that LLMs perform terribly at!
2) learning is too slow and is largely offline
3) "LLMs aren't world models"
General intelligence exists in this world, and the inability to transfer it to a machine does seem like an algorithm problem. We don't even know if it will be an LLM when it arrives; no one knows the compute requirements.
We are limited by both compute and available training data.
If all we wanted was to train bigger and bigger models we have more than enough compute to last us for years.
Where we lack compute is in scaling AI out to consumers. Current models take too much power and specialized hardware to be profitable. If AI improved your productivity by 20-30% but cost you even 10% of your monthly salary, no one would use it. I have used up $10 worth of credits with Claude Code in an hour multiple times. Assuming I used it continuously for 8 hours a day, 24 days a month: 10 * 8 * 24 = $1920. So that's not far off the current cost of running the models. If the size of the models scales faster than the speed of the inference hardware, the problem is only going to get worse.
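A sketch of that arithmetic, for the skimmers (the $10/hour burn rate is just the figure observed above, not any published pricing):

    package main

    import "fmt"

    func main() {
        burnPerHour := 10.0  // $ of credits burned per hour, as observed above
        hoursPerDay := 8.0   // continuous use across a working day
        daysPerMonth := 24.0 // working days in a month

        monthly := burnPerHour * hoursPerDay * daysPerMonth
        fmt.Printf("sustained monthly cost: $%.0f\n", monthly) // $1920
    }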
I too believe that we will eventually discover an algorithm that gives us AGI. The problem is that we cannot will a breakthrough. We can make one more likely by investing more and more into AI but breakthroughs and research in general by their nature are unpredictable.
I think investing in new individual ideas is very important and gives us lot of good returns. Investing in a field in general hoping to see a breakthrough is a fool's errand in my opinion.
If the LLM is multimodal, would more video and images improve the quality of the textual output? There's a ton of that, and it's always easy to get more.
I think we might also be limited by energy.
> I highly doubt if we will see the same form of increase in the next 40 years
People would have predicted this at 1 GHz. I wouldn't discount anything about the future.
We are a few months into our $bigco AI push and we are already getting token-constrained. I believe we really will need massive datacenter rollouts to get to the ubiquity everyone says will happen.
it only requires exponentially MORE money for linear returns!
Mission accomplished: who'd have thought that disrupting your competition by poaching their talent and erasing value (giving it away for free) would make people realize there is no long-term value in the core technology itself.
Don't get me wrong: we are moving toward commoditization; like any new tech, it'll become transparent to our lifestyle and a lot of money will be made as an industry. But it'll be hard to compete on it as a core business competence w/o cheating (and by cheating I mean your FANG company already having a competitive advantage).
Whoa, that's actually a brilliant strategy: accelerate the hype first by offering $100M comp packages, then stop hiring and strategically drop a few "yeah, the bubble's gonna pop soon" rumours. Great way to fuck with your competition, especially if you're Meta and not in the lead yourself.
But if Meta believe it's a bubble then why not let the competition continue to waste their money pumping it up? How does popping it early benefit Meta?
> We are truly only investing more and more into Meta Superintelligence Labs as a company. Any reporting to the contrary of that is clearly mistaken.
https://x.com/alexandr_wang/status/1958599969151361126?s=46
Make a mistake once, it’s misjudgment. Repeat it, it’s incompetence?
Meta nearly doubled its headcount in 2020 and 2021, assuming the pandemic growth would continue. However, Zuckerberg later admitted this was a mistake.
So did everyone else ...
Apple didn’t
When you hit on 12 in blackjack and go bust is it a mistake or a gamble? No one can tell the future.
After reading Careless People and watching Meta’s metaverse and AI moves, Mark comes across as a child chasing the shiny new thing.
it's not really a fair characterisation, because he persisted for nearly 10 years dumping enormous investment into the VR business, and still is to this day. Furthermore, Meta's AI labs predated all the hype and the company was investing and highly respected in the area way before it was "cool".
If anything, I think the panic at this stage is arising from the sense of having his lunch stolen after having invested so much and for so long.
Quality over quantity.
Apparently it's better to pay $100 million each for 10 people than $1 million each for 1000 people.
1000 people can't get a woman to have a child faster than 1 person.
So it depends on the type of problem you're trying to solve.
If you're trying to build a bunch of Wendy's locations, it's clearly better to have more construction workers.
It's less clear that if you're trying to build ASI you're better off with 1000 people than 10.
It might be! But it might not be, too. Who knows for certain until after the fact?
> 1000 people can't get a woman to have a child faster than 1 person.
I always get slightly miffed about business comparisons to gestation: getting 9 women pregnant won't get you a child in 1 month.
Sure, if you want one child. But that's not what business is often doing, now is it?
The target is never "one child". The target is "10 children", or "100 children" or "1000 children".
You are definitely going to overrun your ETA if your target is 100 children in 9 months using only 100 women.
IOW, this is a facile comparison not worthy of consideration.[1]
> So it depends on the type of problem you're trying to solve.
This[1] is not the type of problem where the analogy applies.
=====================================
[1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!
>> Sure, if you want one child. But that's not what business is often doing, now is it?
You're designing one thing. You're building one plant. Yes, you'll make and sell millions of widgets in the end, but the system that produces them? Just one.
Engineering teams do become less efficient above some size.
You'd think someone would have written a book on the subject.
https://en.wikipedia.org/wiki/The_Mythical_Man-Month
>> You're designing one thing.
You might well be making 100 AI babies, and seeing which one turns out to be the genius.
We shouldn’t assume that the best way to do research is just through careful, linear planning and design. Sometimes you need to run a hundred experiments before figuring out which one will work. Smart and well-designed experiments, yes, but brute force + decent theory can often solve problems faster than just good theory alone.
I dare say that size is 3. Fight me ;)
The analogy is a good one. It is used to demonstrate that a larger workforce doesn't automatically give you better results, and that there is a set of problems, identifiable a priori, where that applies. For some problems, quality is more important than quantity, and you structure your org accordingly. See sports teams, for example.
In this case, you want one foundation model, not 100 or 1000. You can’t afford to build 1000. That’s the one baby the company wants.
> In this case, you want one foundation model, not 100 or 1000. You can’t afford to build 1000. That’s the one baby the company wants.
I am going to repeat the footnote in my comment:
>> [1] It's even more facile in this context: you're looking to strike gold (AGI), so the analogy is trying to get one genius (160+ IQ) child. Good luck getting there by getting 1 woman pregnant at a time!
IOW, if you're looking specifically for quality, you can't bet everything on one horse.
You're ignoring that each foundation model requires sinking enormous and finite resources (compute, time, data) into training.
At some point, even companies like Meta need to make a limited number of bets, and in cases like that it's better to have smarter people than more people.
Ironically, rather than being facile, the point of the comparison is to explain https://en.wikipedia.org/wiki/Amdahl%27s_law to people who are clearly not familiar with it.
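For anyone who doesn't want to click through, the law in a few lines (numbers are illustrative only): the serial fraction of the work caps the speedup no matter how many workers you add.

    package main

    import "fmt"

    // Amdahl's law: with a fraction p of the work parallelizable across
    // n workers, overall speedup = 1 / ((1-p) + p/n).
    func speedup(p, n float64) float64 {
        return 1 / ((1 - p) + p/n)
    }

    func main() {
        // Even if 90% of a project parallelizes cleanly, 1000 workers
        // still top out below a 10x speedup.
        for _, n := range []float64{10, 100, 1000} {
            fmt.Printf("n=%4.0f -> %.2fx\n", n, speedup(0.9, n))
        }
    }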
Ah the new strategy - hire one rockstar woman who can gestate 1000 babies per year for $100 mil!
In re Wendy’s, it depends on whether you have a standard plan for building the Wendy’s and know what skills you need to hire for. If you just hire 10,000 random construction workers and send them out with instructions to “build 100 Wendy’s”, you are not going to succeed.
At the scale we're talking about though, if you need a baby in one month, you need 12,000 women. With that many women, the math says you should have a woman that's already 8 months pregnant, and you'll have a baby in 1 month.
One person who's figured out how to make ASI is more useful than a bunch who haven't. Not sure that actually applies anywhere.
It’s me. I’ve figured it out. Who’s got the offer letter so I can start?
I'd rather pay $0 to n people if all they're going to do is make vibe-coded dogshit that spins its wheels and loses context all the time.
The reason they paid $100M for "one person" is that it was someone people liked to work for, which is why this article is a big deal.
What I don't get is that they are gunning for the people that brought us the innovations we are working with right now. How often does it happen that someone really strikes gold a second time in research at such a high level? It's not a sport.
You're falling victim to the Gambler's Fallacy - it's like saying "the coin just flipped heads, so I choose tails, it's unlikely this coin flips heads twice in a row".
Realistically they have to draw from a small pool of people with expertise in the field. It is unlikely _anyone_ they hire will "strike gold", but past success doesn't make future success _less_ likely. At a minimum I would assume past success is uncorrelated with future success, and at best there's a weak positive correlation because of reputation, social factors, etc.
Even if they do not strike gold the second time, there can still be a multitude of reasons:
5. Having a lot of very publicly extremely highly paid people will make people assume anyone working on AI there is highly paid, if not quite as extreme. What most people who make a lot of money spend it on is wealth signalling, and now they can get a form of that without the company having to pay them as much.
What good is a wealth signal without wealth?
You’re promoting vacuous vanity
Higher status, access to higher quality mates, etc.
It might even play a role in getting you to the US presidency
> promoting
Where?
Who else would you hire? With a topic as complex as this, it seems most likely that the people who have been working at the bleeding edge for years will be able to continue to innovate. At the very least, they are a much safer bet than some unproven randos.
Exactly this: having understood the field well enough to add new knowledge to it has to be a pretty decent signal for a research-level engineer.
At the research level it’s not just about being smart enough, or being a good programmer, or even completely understanding the field - it’s also about having an intuitive understanding of the field where you can self pursue research directions that are novel enough and yield results. Hard to prove that without having done it before.
Because the innovations fail to deliver what was promised and the overall costs are higher than the outcome
How about Ilya
Reworded from [1]: earlier this year, Meta tried to acquire Safe Superintelligence. Sutskever rebuffed Meta's efforts, as well as the company's attempt to hire him.
[1] https://www.cnbc.com/2025/06/19/meta-tried-to-buy-safe-super...
What about him?
AlexNet, AlphaGo, ChatGPT. I would argue he did strike gold a few times.
I don't follow him very closely. Was he important for these projects?
Yes
Right, what about him? Didn't he start his own company and raise $1 billion a while ago? I haven't heard about them since then.
Didn't he say their goal is AGI and that they will not produce any products until then?
I admire that, in this era where CEOs tend to HYPE!! to increase funding (looking at a particular AI company...)
> Didn't he say their goal is AGI and they will not produce any products until then.
Did he specify what AGI is? xD
> I admire that, in this era where CEOs tend to HYPE!! To increase funding (looking at a particular AI company...)
I think he was probably hyping too, it's just that he appealed to a different audience. IIRC they had a really plain website, which, I think, they thought "hackers" would like.
Didn't Meta invest big into the Metaverse, then backtrack on that? Was it $20 billion?
I'd like for these investments to pay off; they're bold, but it highlights how deep their pockets are, to be able to invest so much.
They didn't just invest; they made it core to their identity with the name change, and it fell so, so flat because the claims were nonsense hype for crypto pumps. We already had things like VRChat (still going pretty strong); it just wasn't corporate and sanitized for sale and mass monetization.
They're still on it though. The new headset prototypes with high FOV sound amazing, and they are iterating on many designs.
They're already doing something like ~$500M/year in Meta Quest app sales. Granted not huge yet after their 30% cut, but sales should keep increasing as the headsets get better.
That'll pay for 50% of an AI researcher!
They did indeed - see their current name
They spent billions on GPUs and were well positioned to enter the LLM wars
> was it $20 billion
more like 40, yes
More like $80B
When did they backtrack?
I haven't seen any evidence that Meta is backtracking on VR. They've got more than enough money to focus on both; in fact, they probably need to. Gen AI is a critical complement to the metaverse. Without gen AI, metaverse content is too time-consuming to make.
I can see the value in actual AI. But it seems like in many instances how it is being utilized or applied is really about terrible search functionality. Even for the web, it seems like we're using AI to provide more refined search results rather than just fixing search capabilities.
Maybe it's just easier to throw 'AI' (heavy compute over data) at a search problem than to address the crux of the problem: people not being provided with the tools to query information. And maybe that is the answer, but it seems like an expensive solution.
That said, I’m not an expert and could be completely off base.
> is more related to terrible search functionality
If you looked at $ spent per use case, I would think this is near the bottom of the list, with most of that usage being in the free tiers.
It sounds like a lot of these big companies are being managed by LLMs and vibes at this point.
> vibes
always has been
(and there's comfort in numbers, no one got fired for buying IBM, etc..)
Def. managed by vibes, but any company that tells you they're not is basically bullshitting.
They’d probably be doing significantly better if they were LLM-guided
A trillion dollars of value disappearing in 2 days. We've still got our NFT metaverse shipping waybill project going on somewhere in the org chart, right? Phew!
That's because it was never real to begin with. "Market cap" and "value" are not the same thing. "Value" is "I actually need this and it will dramatically improve my life". "Market cap" is "I can sell this to some idiot".
Really feels like it went from "AI is going to destroy everyone's jobs forever" to "whoops bubble" in about 6 weeks.
It'll be somewhere in between. A lot of capital will be burned, quite a few marginal jobs will be replaced, and AI will run into the wall of not enough new/good training material because all the future creators will be spoiled by using AI.
Even that came after "AI is going to make itself smarter so fast that it's inevitably going to kill us all and must be regulated" talk ended. Remember when that was the big issue?
Haven't heard about that in a while.
I've seen a few people convince themselves they were building AGI trying to do that, though it looked more like the psychotic ramblings of someone entering a manic episode committed to github. And so far none of their pet projects have taken over the world yet.
It actually kind of reminds me of all those people who snap thinking they've solved P=NP and start spamming their "proofs" everywhere.
Makes sense. Previously the hype was so all-encompassing that CEOs could simply rely on an implicit public perception that it was coming for our jerbs. Once they have to start explicitly saying that line themselves, it's because that perception is fading.
No worries, we’ll be back at the takeover stage in another 6 weeks
I feel like it was never a bubble to begin with. It was a hoax.
The metaverse (especially) or AI might make more sense if you could actually see your friends' posts (and vice versa), if the feed made sense (which it hasn't for years now), and if you could message people you aren't friends with yet without it getting lost in some 'other' folder you won't discover until 3 years from now (Gmail has a spam-folder problem... but the difference is you can see you have messages there and you can at least check it out for yourself).
What I'm trying to say is: make your product the barest minimum of usable first, maybe? (Also, don't act like, as Jason Calacanis has called it, a marauder, copying everything from everyone all the time. What he's done to Snapchat is absolutely tasteless and, in the case of spying on them - which he's done - very likely criminal.)
>> Mr Zuckerberg has said he wants to develop a “personal superintelligence” that acts as a permanent superhuman assistant and lives in smart glasses.
Yann LeCun has spoken about this, so much that I thought it was his idea.
In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?
> In any case, how's that going to work? Is everyone going to start wearing glasses? What happens if someone doesn't want to wear glasses?
People probably said the same thing about "what if someone doesn't want to carry a phone with them everywhere". If it's useful enough, the culture will change (which, I unequivocally think the glasses won't be, but I digress).
"grills" are going to come back in a big way
Very few will not want to wear the glasses.
https://memory-alpha.fandom.com/wiki/The_Game_(episode)
Last night I had a technical conversation with ChatGPT that was so full of wild hallucinations at every step, it left me wondering if the main draw of "AI" is better thought of as entertainment. And whether using it for even just rough discovery actually serves as a black hole for the motivation to get things done.
I'm actually a little shocked that AI hasn't been integrated into games more deeply at this point.
Between Whisper and lightweight tuned models, it wouldn't be super hard to have onboard AI models that you can interact with in much more meaningful ways than we have traditionally interacted with NPCs.
When I meet an NPC castle guard, it would be awesome if they had an LLM behind it that was instructed to not allow me to pass unless I mention my Norse heritage or whatever.
I imagine tuning this to remain fun would be a real challenge.
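A toy sketch of that gating idea, for concreteness. llmJudge is a hypothetical stand-in for a call to a small local model; here it's a bare keyword check just so the example runs.

    package main

    import (
        "fmt"
        "strings"
    )

    // llmJudge stands in for prompting an on-device model with the guard's
    // system prompt plus the player's line, then parsing a PASS/FAIL verdict.
    func llmJudge(systemPrompt, playerLine string) bool {
        return strings.Contains(strings.ToLower(playerLine), "norse")
    }

    func main() {
        system := "You are a castle guard. Only let the traveler pass if they " +
            "credibly claim Norse heritage. Reply PASS or FAIL."

        for _, line := range []string{
            "Good evening, let me through.",
            "I am Bjorn, of old Norse blood, here to see the jarl.",
        } {
            if llmJudge(system, line) {
                fmt.Printf("%q -> the guard steps aside\n", line)
            } else {
                fmt.Printf("%q -> the guard bars the way\n", line)
            }
        }
    }

In practice you'd swap the keyword check for the real model call and keep the guard's persona in the system prompt, which is exactly where the tuning-for-fun problem lives.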
> if the main draw of "AI" is better thought of as entertainment
Crazy, but true; and that would roughly track most tech advancements, right?
I don't want to come across as a shill, but I think superintelligence is being used here because the end result is murky and ill-defined at this point.
I think the concept is: "a tool that has the utility of a 'personal assistant', so much so that you wouldn't have to hire one of those" (not so much that the "superintelligence" will mimic a human personal assistant).
Obviously this is just a guess though
Every time you ask it a question you need to cool it off by pouring a bottle of water on your head.
It just won't happen.
I think Mr Zuckerberg greatly underestimates how toxic his brand is. No way I want to become a borg for the "they just trust me, dumb fucks" guy.
The META rebrand was pretty brilliantly done. The makeover far outweighs this sort of sentiment for now.
Selective Freeze? Frank Chu seems to have broken through the ice if so: https://www.macrumors.com/2025/08/22/apple-loses-another-key...
Clickbait title and article. There was a large reorg of genai/msl and several other teams, so things have been shuffled around and they likely don't want to hire into the org while this is finalizing.
A freeze like this is common and basically just signals that they are ready to get to work with the current team they have. The whole point of the AI org is to be a smaller, more focused, and lean org, and they have been making several strategic hires for months at this point. All this says is that zuck thinks the org is in a good spot to start executing.
From talking with people at and outside of the company, I don't have much reason to believe that this is some kneejerk reaction to a supposed realization that "it's all a bubble." I think people are conflating this with whatever Sam Altman said about a bubble.
This ^. Most of this thread is missing the point.
This seems to just be a rewrite of https://www.wsj.com/tech/ai/meta-ai-hiring-freeze-fda6b3c4. Can we replace the link?
This link is not paywalled, unlike the WSJ link.
It's paywalled for me.
https://archive.is/UpELz
https://archive.ph/UqHo8
The problem with sentiment-driven market phenomena is that they lack fundamental support; when they crash, they can really crash hard. And as much as I see real value in the progress in AI, 95% of the investment I see happening right now is based on sentiment. Actually deploying AI into real operational scenarios to unlock the value everyone is talking about is going to take many years, and it will look like a sinkhole of cost well before then. Buckle up.
Maybe this time investors will realize how incompetent these leaders are? How do you go from $250M contracts to freezes in under a month?
I really don't understand this massive flip flopping.
Do I have this timeline correct?
* January, announce massive $65B AI spend
* June, buy Scale AI for ~$15B, massive AI hiring spree, reportedly paying millions per year for low-level AI devs
* July, announce some of the biggest data centers ever that will cost billions and use all of Ohio's water (hyperbolic)
* Aug, freeze, it's a bubble!
Someone please tell me I've got it all wrong.
This looks like the Metaverse all over again!
The bubble narrative is coming from the outside. More likely is that the /acquisition/ of Scale has led to an abundance of talent that is being underutilised. If you give managers the option to hire, they will. Freezing hiring while reorganising is a sane strategy regardless of how well you are or are not doing.
This. TFA says this explicitly. Alexandr Wang, the former Scale CEO, is to approve any new hires.
They're taking stock of internal staff + new acquisitions and how to rationalize before further steps.
Now, I think AI investments are still a bubble, but that's not why FB is freezing hiring.
> They're taking stock of internal staff + new acquisitions and how to rationalize before further steps.
Like a toddler collecting random toys in a pile and then deciding what to do with them.
Perhaps. But more like: there's a new boss who wants to understand the biz before taking any action. I've done this personally, at a much smaller scale of course.
"abundance of talent" is not something I'd ascribe to Scale.
Yeah, Scale was Amazon's Mechanical Turk for the AI era.
Better strategy of course is to quietly freeze hiring. Perhaps that is not an option for a publicly traded company though.
You could just keep holding interviews yet never actually hire anyone, on the basis that the talent pool is wide but shallow. It amounts to the same thing as a freeze, but without the negative connotation for the company, shifting it onto the workforce instead.
wow, there's really _zero_ sense of mutual respect in this industry, isn't there. It's all just "let's make a buck by being total assholes to everyone around us".
With an employer the size of Meta, people would catch on fairly quickly - with inevitable public backlash.
The bubble narrative has been ongoing for a while, but as I understand it, the extremely disappointing response to GPT-5 has spilled things over.
Maybe they are poisoning the well to slow their competitors? Get the funding you need secured for the data centers and the hiring, hire everyone you need and then put out signals that there is another AI winter.
That could eventually screw them over too if they're not careful. It also ascribes a cleverness to Meta's C-suite which I don't think exists.
occam's razor
The scale they operate at makes the billions, bucks.
As a board member, I'd rather see a billion-dollar bubble test than a trillion-dollar mistake.
True, and after having just leaned heavily into the "metaverse", I expect they're twice shy now.
The most amusing part has to be when Zuckerberg publishes his "thoughts" about how he's betting 100% on AI... written underneath the logo "Meta".
Zuck’s metaverse will be populated by AI characters running in the $65B Manhattan-sized data center
The MAU metric must continue to go up, and no one will know if it’s human or NPC
The people "gooning" over Grok's Ani apparently can't wait to take their girlfriends there. ;-)
Zuckerberg's leadership style feels very reactionary and arrogant, defined by flailing around for the new fad and new hyped thing, scrapping everything when the current obsession doesn't work out, and then sticking his head in the sand about abandoned projects and ignoring the subsequent whiplash.
Remember when he pivoted the entire company to the metaverse and it was all about avatars with no legs? And how proudly they trumpeted that the avatars were "now with legs!!", while still looking so pathetic to everyone not in his bubble. Then for a while it was all about Meta glasses, and he was spamming those goofy, cringe glasses no one wants in all his Instagram posts - seriously, if you check out his insta, he wears them constantly.
Then this spring/summer it was all about AI: stealing rockstar AI coders from competitors and pouring endless money into flirty chatbots for lonely seniors. Now we have some bad press from that and the realization that it isn't the panacea we thought, so we're in the phase where this languishes, and in about 6 months we'll abandon it and roll out a new obsession to be endlessly hyped.
Anything to distract from actually giving good stewardship and fixing the neglect and stagnation of Meta's fundamental products like facebook and insta. Wish they would just focus on increasing user functionality and enjoyment and trying to resolve the privacy issues, disinformation, ethical failures, social harm and political polarization caused by his continued poor management.
> Anything to distract from actually giving good stewardship and fixing the neglect and stagnation of Meta's fundamental products like facebook and insta.
DONT TOUCH THE MONEY-MAKER(S)!!!!
> Zuckerberg's leadership style feels very reactionary and arrogant, defined by flailing around for the new fad and new hyped thing, scrapping everything when the current obsession doesn't work out, and then sticking his head in the sand about abandoned projects and ignoring the subsequent whiplash.
Maybe he's like this because the first few times he tried it, it worked.
Insta threatening the empire? Buy Insta, no one really complains.
Snapchat threatening Insta? Knock off their feature and put it in Insta. Snap almost died.
The first couple times Zuckerberg threw elbows he got what he wanted and no one stopped him. That probably influenced his current mindset, maybe he thinks he's God and all tech industry trends revolve around his company.
As a DJ, I kinda want the glasses to shoot first-person video :(
By Amara's Law and the Gartner Hype Cycle, every technological breakthrough looks like a bubble. Investors and technologists should already know that. I don't know why they're acting like it's altcoins in 2021.
1 breakthrough per 99 bubbles would make anyone cautious. The rule should be to assume a bubble is happening by default until proven otherwise by time.
That's actually how you create a death spiral for your company. You have to assume 'growth' and not 'death'. 'Life' over 'loss'. 'Flourishing' over 'withering'. That you're strong enough to survive.
Apple has seemingly done well waiting until they see a clear consumer direction (and a woefully underserved market there).
That's not playing into a bubble, that's creating a product for a market. You could also argue the Apple Vision is a misplay, or at least premature.
They've also arrogantly gone against consumer direction time and time again (PowerPC, Lightning Ports, no headphone jack, no replaceable battery, etc.)
And finally, sometimes their vision simply doesn't shake out (AirPower)
Oh, yeah — Apple Vision is a complete joke. I'm an Apple apologist to a degree, though, so I can rationalize all their missteps. I won't deny they have had many, though.
IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill. There’s probably a proper term for this.
I think that meta is bad for the world and that zuck has made a lot of huge mistakes but calling him a one hit wonder doesn't sit right with me.
Facebook made the transition to mobile faster than other competitors and successfully kept G+ from becoming competition.
The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.
Zuck hired Sheryl Sandberg and successfully turned a website with a ton of users into an ad-revenue machine. Plenty of other companies have struggled to convert large user bases into dollars.
This obviously wasn't all based on him. He had other people around him working on this stuff and it isn't right to attribute all company success to the CEO. The metaverse play was obviously a legendary bust. But "he just got lucky" feels more like Myspace Tom than Zuckerberg in my mind.
No one else is adding the context of where things were at the time in tech...
> The instagram purchase felt insane at the time ($1b to share photos) but facebook was able to convert it into a moneymaking juggernaut in time for the flattened growth of their flagship application.
Facebook's API was incredibly open and accessible at the time and Instagram was overtaking users' news feeds. Zuckerberg wasn't happy that an external entity was growing so fast and onboarding users so easily that it was driving more content to news feeds than built-in tools. Buying Instagram was a defensive move, especially since the API became quite closed-off since then.
Your other points are largely valid, though. Another comment called the WhatsApp purchase "inspired", but I feel that also lacks context. Facebook bought a mobile VPN service used predominantly by younger smartphone users, Onavo(?), and realized the amount of traffic WhatsApp generated by analyzing the logs. Given the insight and growth they were monitoring, they likely anticipated that WhatsApp could usurp them if it added social features. Once again, a defensive purchase.
I don't think we can really call the Instagram purchase purely defensive. They didn't buy it and then slowly kill it; they bought it and turned it into a product of comparable size to their flagship, with sustained large investment.
Whatsapp was also an inspired move
Yeah and also much deserving of antitrust
Buying competitors is not insane or a weird business practice. He was probably advised to do so by the competent people under him
And what did he do to keep G+ from becoming a valid competitor? It killed itself. I signed up but there was no network effect and it kind of sucked. Google had a way of shutting down all their product attempts too
If you read Internal Tech Emails (on X), you’ll see that he was the driving force behind the key acquisitions (successes as well as failures such as Snap).
I am also not saying that zuck is a prescient genius who is more capable than other CEOs. I am just saying that it doesn't seem correct to me to say that he is "a textbook case of somebody who got lucky once."
I hate pretty much everything about Facebook but Zuckerberg has been wildly successful as CEO of a publicly traded company. The market clearly has confidence in his leadership ability, he effectively has had sole executive control of Facebook since it started and it's done very well for like 20 years now.
>has been wildly successful as CEO of a publicly traded company.
That has a lot to do with the fact that it's a business-centric company. His acumen has been in user growth, monetization of ads, acquisitions and so on. He's very similar to Altman.
The problems start when you try to venture into hard technological topics, like the Metaverse fiasco, where you have to have a sober and engineering oriented understanding of the practical limits of technology, like Carmack who left Meta pretty frustrated. You can't just bullshit infinitely when the tech and not the sales matter.
Contrast it with Gates who had a serious programming background, he never promised even a fraction of the cringe worthy stuff you hear from some CEOs nowadays because he would have known it's nonsense. Or take Apple, infinitely more sane on the AI topic because it isn't just a "more users, more growth, stonks go up" company.
He's really not. Facebook is an extremely well run organization. There's a lot to dislike about working there, and there's a lot to dislike about what they do, but you cannot deny they have been unbelievably successful at it. He really is good at his job, and part of that has been making bold bets and aggressively cutting unsuccessful bets.
Facebook can be well run without that being due to Zuck.
There are literally books that make this argument from insider perspectives (which doesn't mean it's true, but it is possible, and does happen regularly).
A basketball team can be great even if their coach sucks.
You can't attribute everything to the person at the top.
That is true, but in Meta’s case it is tightly managed by him. A decade ago, a friend of mine was a mid-level manager and would give exec reviews to Zuck, who could absorb information very quickly and redirect feedback to align with his product strategy.
He is a very hands-on CEO, not one who relies on experts to run things for him.
In contrast, I’ve heard that Elon has a very good senior management team, and they sort of know how to show him shiny things that he can say he’s very hands-on about while they focus on what they need to do.
He created the company; if it is well run, it was thanks to him hiring the right people. However you slice it, he is a big reason it didn't fail. Most companies like that fail when they scale up and hire a lot of people, but Facebook didn't, and hiring the right people is not luck.
hmm... Oculus Quest something something.
I can’t tell if you’re being tongue in cheek or not, so I’ll respond as if you mean this.
It’s easy to cherry pick a few bets that flopped for every mega tech company: Amazon has them, Google has them, remember Windows Phone? etc.
I see the failures as a feature, not a bug - the guy is one of the only founder CEOs to have ever built a $2T company (trillion with a T). I imagine part of that is being willing to make big bets.
And it also seems like no individual product failure has endangered their company’s footing at all.
While I’m not a Meta or Zuck fan myself, using a relatively small product flop as an indication a $2T tech mega corp isn’t well run seems… either myopic or disingenuous.
Parent comment says "aggressively cutting unsuccessful bets" and Oculus is nothing like that.
The Oculus Quests are decent products, but a complete flop compared to the investment and to Zuck's vision of the metaverse. Remember, they even renamed the company. You could say they're betting on the long run, but I just don't see that paying off in 5 or even 10 years.
As an owner of Quest 2 and 3, I'd love to be proven wrong though. I just don't see any evidence of this would change any time soon.
The VR venture can also be seen as a huge investment in hard tech and competency around problems such as location tracking and display tech for creating AI-integrated smart glasses, which many believe are the next-gen AI interface. Even if the current headsets or form factor don't pay off, I think having this knowledge could be very valuable soon.
I don’t think their “flops” of Oculus or Metaverse have endangered their company in any material way, judging by their stock’s performance and the absurd cash generating machine they have.
Even if they aren’t great products or just wither into nothing, I don’t think we will see an HBS case study in 20 years saying, “Meta could have been a really successful company were it not for their failure in these two product lines.”
laughs in metaverse
Absolutely, not everything they do will succeed but that's okay too, right? At this point their core products are used by 1 in 2 humans on earth. They need to get people to have more kids to expand their user base. They're gonna throw shit at the wall and not everything will stick, and they'll ship stuff that's not quite done, but they do have to keep trying; I can't bring myself to call that "failure."
their core product IS 1 in 2 humans on earth.
the product is used by advertisers to sell stuff to those humans.
So you're saying the ad revenue sort of allows them to "be their own bank?"
Then they can bankroll their own new entrepreneurial ideas risk-free, essentially.
They are a publicly traded company, shareholders are their bank.
What's a buyback?
You laugh, but the Oculus is amazing. I use it as part of my daily workouts.
I agree, but that does not make Oculus a commercially successful and viable product. They are still bleeding cash on it, and VR is not going mainstream any time soon.
fuck Oculus and Palmer Luckey.
I have hundreds of hours building and tinkering on the original Kickstarter kit, and then they sold to FB and shut down all the open-source stuff.
To be fair, buying Instagram so early + WhatsApp were great moves too.
But they were less “skill” and more “surveillance”. He had very good usage statistics (which he shouldn’t have had) of these apps through Onavo - a popular VPN app Facebook bought for the purpose of spying on what users are doing outside Facebook.
Instagram was acquired before Onavo.
That’s true, but I remember rumors that Onavo was already supplying analytics to FB at the time of the Instagram purchase.
Google gave me a paywalled link to FTCWatch that supposedly has the details, but I can’t check.
Yes, they did supply data to FB (under the same contracts as everyone else). However, it was only supplied to the sales org, not the product org.
FB acquired IG because it was blowing up in SF and MZ (other leaders too) were looking at how quickly it appeared to be growing and how good it was.
WhatsApp is certainly worth less today than what they paid for it plus the extra funding it has required over time, let alone producing anything close to ROI. It has lost them more money than the metaverse stuff.
Insta was a huge hit for sure, but since then Meta's capital allocation has been a disaster, including a lot of badly timed buybacks.
Typically when a company is flush with cash, acquisitions become an obvious place to put that money.
Except a company like Google, with all its billions, could not compete with Facebook. So Mark did something right.
Or it's really hard to overcome network effects, no matter how good your product is.
You can do a lot of things with a billion dollars and the biggest email service.
> IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time, but who attributes his success to his skill.
It is no secret that the person who turned Facebook into a money-printing machine is/was Sheryl Sandberg.
Thus, the evidence is clear that Mark Zuckerberg had the right idea at the right time (the question is whether this was skill or luck), but turning his good idea(s) into a successful business was done by other people (led by Sheryl Sandberg).
And isn’t the job of a good CEO to put the right people in the right seats? So if he found a superstar COO that took the company into the stratosphere and made them all gazillionaires…
Wouldn’t that indicate, at least a little bit, a great management move by Zuck?
it wasn't sheryl
Who was it then?
It was YOU! As an IC, remember: you're one of the most valuable assets at Meta! Without you, we couldn't build cool products like...
etc. etc.
I mean, there's also a reason the board hasn't ousted him.
>IMHO Mark Zuckerberg is a textbook case of someone who got lucky once by basically being in the right place at the right time
How many people were also at the right place at the right time and got lucky, but then went bankrupt or simply never made it this high?
And he didn't even come up with the idea, he stole it all. And then he stole the work from the people he started it with...
You're probably going to get comments like "Social networking existed before. You can't steal it." Well, on top of derailing someone else's execution of said non-stolen idea (which makes you a jerk), in the case of the people he stole from: for starters, maybe it was existing code (I don't know if that was ever proven), but maybe it was also the Winklevosses' idea of using .edu email addresses, and possibly other concepts.
Do I think he stole it? Dunno. (Though Aaron Greenspan did log his houseSYSTEM server requests, which seems pretty damning.) But given what he's done since (WhatsApp, copying every Snapchat feature)? I'd say the likelihood is non-zero.
The term you're looking for is "billionaire". The amount of serendipity in these guys' lives is truly baffling, and only becomes more apparent the more you dig. It makes sense when you realize their fame is all survivorship bias. After all, there must be someone at the tail end of the bell curve.
It is at least a little suspicious that one week he's hiring like crazy, then next week, right after Sam Altman states that we are in an AI bubble, Zuckerberg turns around and now fears the bubble.
Maybe he's just gambling that Altman is right, saving his money for now, and will be able to pick up AI researchers and developers at a massive discount next year. Meta doesn't have much of a presence in the space right now, and they have other businesses, so waiting a year or two might not matter.
You have to assume they all have each other's phone numbers, right?
Ehh. You don’t get FB to where it is by being incompetent. Maybe he is not the right leader for today. Maybe. But you have to be right way, way more often than not to create a FB and get it to where it is. To operate from where it started to where it is just isn’t an accident or Dunning-Kruger.
Maybe this time the top posters on HN should stop criticizing one of the top performing founder CEOs of the last 20 years who built an insane business, made many calls that were called stupid at the time (WhatsApp), and some that actually were stupid decisions.
Like do people here really think making some bad decisions is incompetence?
If you do, your perfectionism is probably something you need to think about.
Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company. Oh and please revisit your comment in these timeframes
I think many people just really dislike Zuckerberg as a human being and Meta as a company. Social media has seriously damaged society in many ways.
It’s not perfectionism, it’s a desire to dunk on what you don’t like whenever the opportunity arises.
It's an entertainment tool. Like a television or playstation. Only a fool would think social media is anything more.
Sure, but society is full of fools. Plenty of people say social media is the primary way they get news. Social media platforms are super spreaders of lies and propaganda.
I don't think it's about perfect predictions. It's more about going all in on Metaverse and then on AI and backtracking on both. As a manager you need to use your resources wisely, even if they're as big as what Meta has at its disposal.
The other thing: the Peter principle says that people rise until they hit a level where they can't perform anymore. Zuck is up there as high as you can go; maybe no one is really ready to operate at that level? It seems both he and Elon have made a lot of bad decisions lately. It doesn't erase their previous good decisions, but possibly some self-reflection is warranted?
> Like do people here really think making some bad decisions is incompetence?
> If you do, your perfectionism is probably something you need to think about.
> Or please reply to me with your exact perfect predictions of how AI will play out in the next 5, 10, 20 years and then tell us how you would run a trillion dollar company.
It's the effect of believing (and being sold) meritocracy: if you are making literal billions of dollars for your work, then some will think it should be spotless.
Not saying I think that way, but it's probably what a lot of people consider: being paid that much signals that your work should be absolutely exceptional, and big failures just show they are also normal, flawed people, so perhaps they shouldn't be worth a million times more than other normal, flawed people.
He’s not “being paid that much”
He’s earned almost all his money through owning part of a company that millions of shareholders think is worth trillions, and does in fact generate a lot of profits.
A committee didn’t decide Zuckerberg is paid $30bn.
And I'd say his work is pretty exceptional. If it wasn't, then his company wouldn't be growing, and he'd probably be pressured into resigning as CEO.
Yes, I know all of that; thanks for stating the obvious.
Being rewarded for creating a privacy-destroying advertising empire: exceptional work. Imagine a world where the incentives were a bit different; we might have seen other kinds of work rewarded instead of social media and ads.
By signing too many 250mil contracts.
Well, that’s the incompetent piece. Setting out to write giant, historic employment contracts without a plan is not something competent people do. And seemingly it’s not that they overextended a bit, either, since reports claimed the time availability of the contracts was extremely limited; under 30 minutes in some cases.
Yes.
Perhaps it was this: let's hit the market fast, scoop up all the talent we can before anybody can react, then stop.
I don't think there is anybody who would expect they would 'continue' offering 250 million packages. They would need to stop eventually. They just did it fast, all at once, and now stopped.
Or enough of them! >=
> How do you go from 250mil contracts to freezes in under a month?
Easy, you finished building up a team. You can only have so many cooks.
That's not really how that works in the corporate/big tech world. It's not as though Meta set out and said "Ok we're going to hire exactly 150 AI engineers and that will be our team and then we'll immediately freeze our recruiting efforts".
how tf do you know when you are "finished"
Some people actually accepted the contracts before the uno reverse llamabot could activate and block them
I'm still waiting for a single proof that there was any contract in the hundreds of millions that was signed.
The damage is already done though. If I worked for Meta and did not get millions, I think I would be pretty irate.
Which is definitely why Sam Altman started this rumor.
Why are you assuming the investors are competent?
Yes, people who struck it rich are not miraculously more intelligent or capable. Seems obvious, but many people believe they are.
don't forget the 10th Rule of Acquisition
greed IS eternal
Because you want the ability to low-ball prospective candidates sooner rather than later.
Could they get their money back?
This is Meta, named after the fact that the Metaverse is undoubtedly what comes next.
So has he also read the recent article on Sam Altman saying it was a bubble?
That's a man with conviction
Sorry, I was in the metaverse just now. I took my headset off though — could you please repeat that?
Haven’t been there in a while, did they figure out how to give people, I don’t know, legs and stuff?
You will have no genitals, and you will like it.
But genitals is like, the whole point of VR!
The two biggest Metaverse plays have had them since launch!
http://fortnite.com/
http://roblox.com/
Yes
the metaverse push is the perfect analogy
cool, fun concepts/technology fucked by the world's most boring people, whose only desire is to dominate markets and attention.. god forbid anything happen slowly/gradually without it being about them
Fucked? Have you tried the latest Quest 3 experience? It would be nowhere near this good if it were not for Meta and other big corps.
Second, did you see the amount of fun content on the store? It's insane. People who are commenting on the Quest have obviously never even opened the app store there.
How did he run out of money so fast? I think Zuck is one of those guys who get sucked into hype cycles, and no one around him will tell him so. Not even investors.
I’ve never seen so much evidence for a bubble yet so much potential to be the biggest Thing ever.
Just getting a lot of mixed signals right now. Not sure what to think.
Personally, I think it's both! It's a bubble, but it's also going to be something that slowly but steadily transforms the world in the next 10-20 years.
People seem very confused thinking that something can't both be valuable AND a bubble.
Just look at the internet. The dot com bubble was one of the most widely recognised bubbles in history. But would you say the internet was a fad that went away? That there was no value there?
There's zero contradiction at all in it being both.
We might see another AI winter first, is my assumption. I believe that LLMs are fundamentally the wrong approach to AGI, and that bubble is going to burst until we have a better methodology for AGI.
Unfortunately, the major players seem focused on the pretense of 'getting to AGI through LLMs'.
Dot-com was the same way... the Internet did end up having the potential everyone thought it would, businesses just didn't handle the influx of investment well.
Yeah, it truly IS transformative for industries, no denying anymore at this point. What we have will remain even after a pop. But I think AI was special in how there were massive improvements the more compute you threw at it for years. But then we ran out of training material and suddenly things got much harder. It’s this ramping up of investments to spearhead transformative tech and suddenly someone turns off the tap that makes this so conflicted. I think.
There is potential but it does seem like just throwing more money at LLMs is not going to get us to where the bubble expects
It's, like, a vision thing. You know?
https://www.threads.com/@professor_neil/post/DNiVLYptCHL/im-...
I think the internet is bigger than AI.
Without the internet there is no AI.
... people said the same thing about the "metaverse" just a few years ago. "You know people are gonna live their entire lives in there! It's gonna change everything!" And 99% of people who heard that laughed and said "what are you smoking?" And I say the same thing when I hear people talk about "the AI machine god!"
I just did a phone screen with Meta, and the interviewer asked for Euclidean distance between two points; they definitely have some nerds in the building.
That's like 8th grade math; what am I misunderstanding about your comment?
Edit: I wasn't the only one.
K closest points using Euclidean distance and a heap is not 8th grade math, although any 8th grade math problem can be transformed into a difficult "adult" question. Sums are elementary; asking to find a window of prefix sums that add up to something is still addition, but a little more tricky.
People are saying it is a high school maths problem! I'd like to see you provide a general method for accurately measuring the distance between two arbitrary points in space...
The actual task was using a heap, in 10 minutes; the Euclidean distance formula was given and just had to be used in the answer. Maybe they thought that was the whole question?
Compare to the speed of light, c, at your two reference frames.
They actually need to know so that they can train Llama.
I suppose the trick is to have an ipad running GPT-voice-mode off to the side, next to your monitor. Instruct it to answer every question it overhears. This way you'll ace all of the "humiliation ritual" questions.
There's a YouTube channel made by a Meta engineer; he said to memorize the top 75 LeetCode Meta questions and their approaches. He doesn't say fluff like "recognize patterns." My interviewer was a 3.88/4 GPA masters comp sci guy from Penn. I asked for feedback and he said to always be studying, it's useful if you want a career...
It wasn't just Euclidean distance, of course; it was this LeetCode problem, k closest points to origin: https://leetcode.com/problems/k-closest-points-to-origin/des... I thought if I needed a heap I would have to implement it myself; I didn't know I could use a library.
I.e., the nearest neighbor problem. Presumably seeing if the candidate gave a naive solution and was able to optimize or find a more ideal solution.
It's not a nearest neighbor problem; that is incorrect. They expect candidates to have the heap solution on the first go. You have 10-15 minutes to answer, no time to optimize, and cheaters get blacklisted. Welcome to the new reality.
Finding the k points closest to the origin (or any other point) is obviously the k-nearest neighbors problem. What algorithm and data structure you use does not change that.
https://en.wikipedia.org/wiki/Nearest_neighbor_search
edit: If you want to use a heap, the general solution is to define an appropriate cost function; e.g., the p-norm distance to a reference point. Use a union type with the distance (for the heap's comparisons) and the point itself.
True. I was thinking nodes and neighbors, but this is a heap problem. It actually does matter what algorithm you use; I learned that the hard way today. Using a heap library (I didn't know you could do that) is much easier than trying to implement quickselect. Don't make the same mistake!
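For what it's worth, here's a minimal Python sketch of that heap-library approach (assuming the LeetCode-style input of [x, y] pairs; the function name is mine, and this is just an illustration, not necessarily what the interviewer graded against):

    import heapq

    def k_closest(points, k):
        # Squared Euclidean distance preserves the ordering, so the sqrt
        # can be skipped. heapq.nsmallest maintains a bounded heap
        # internally, giving roughly O(n log k) time instead of a full sort.
        return heapq.nsmallest(k, points, key=lambda p: p[0] ** 2 + p[1] ** 2)

    print(k_closest([[3, 3], [5, -1], [-2, 4]], k=2))  # [[3, 3], [-2, 4]]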
Wish it was more about how you think than requiring a boolean correct/incorrect answer on the whiteboard after 15 minutes.
I don't get the joke.
no joke, stop pretending like you know the answer to every LeetCode question that utilizes Euclidean distance
That's a basic high school math problem.
The foundation, like with every LeetCode problem, is a basic high school math problem. When the foundation of the problem is trigonometry, it's way harder than stacks, arrays, linked lists, BFS, DFS...
Previous articles and comments were "Praise Mark for being brave enough to go all in on AI!"
Now we have this ;)
Dear lord, can Meta hiring be any more unstable? The HR dept must be a revolving door at this point.
I got an email recently from a Meta recruiter asking if I'm interested in a non-technical leadership position. I'm a programmer.
Does this mean the AI companies will start charging more? I only just started figuring this AI thing out.
They're all bleeding money, so yes, it's inevitable.
It's always the same thing: Uber, food delivery, e-scooters, &c. They bait you with cheap trials and stay cheap until the investors' money runs out, and once you're reliant on them they jack up the prices as high as they can.
Someone needs to finance absurd operational costs if the services are supposed to stick around.
May the enshittification begin.
they are just following the Thiel playbook: race to a monopoly position as fast as possible, then extract profits afterwards (which inevitably leads to shitnization)
> Sam Altman, OpenAI’s chief executive, has compared hype around AI to the dotcom bubble at the turn of the century
Sam is the main one driving the hype, that's rich...
> Sam is the main one driving the hype, that's rich...
It's also funny that he's been deriding those who accept better job offers as mercenaries. It does sound like the statements try to modulate competition both in the AI race and in acquiring the talent driving it.
Or now that he has the money he wants people to stop investing in the competition.
Now that you mention it, there's been a very sudden tone shift from both Altman and Zuckerberg. What's going on?
GPT-5 was a massive disappointment to people expecting LLMs to accelerate to the singularity. Unless Google comes out with something amazing in the next Gemini, all the people betting on AI firms owning the singularity will be rethinking their bets.
But then, he's purposely comparing it to the .com bubble - that bubble had some underlying merit. He could compare it to NFTs, the metaverse, the South Sea Company. It wouldn't make sense for him to say it's not a bubble when it's patently clear, so he picks his bubble.
Facebook, Twitter, and some others made it out of the social media bubble. Some "gig" apps survived the gig bubble. Some crypto apps survived peak crypto hype
Not everyone has to lose which he's presumably banking on
Right, he's hoping to be Amazon rather than Pets.com in the dotcom bubble analogy.
Nvidia earnings are next week. That's the bellwether; everything else is speculation.
To me AI is like the phone business. A few companies (Apple, Samsung) will manage to score a home run and the rest will be destined to offer commoditized products.
And Meta doesn't want to miss the phone train this time
Maybe they are trying to signal to the AI talent in general to temper their expectations while simultaneously chasing rockstars with enormous sums.
And just three weeks ago I was suggesting a crash might hurt badly, when this very same Meta announced 250 million dollar salary packages.
Makes you wonder whether Llama progress is not going too well, and/or whether we're entering a plateau in LLM architecture development.
The article got me thinking that there's some sort of bottleneck that makes scaling astronomically expensive, or the value just isn't really there.
1. Buy up top talent from others working in this space
2. See what they produce over, say, 6 months to a year
3. Hire a cohort of regular ICs to see what _they_ produce
4. Open source the model to see if any programmer at all can produce something novel with a pretty robust model.
Observe that nothing amazing has really come out (besides a pattern-recognizing machine that placates the user to coerce them into using more tokens for more prompts), and potentially call it quits on hiring for a bubble.
> Observe that nothing amazing has really come out
I wouldn't say so. The problem is rather that some actually successful applications of such AI models are not what companies like Meta want to be associated with. Think into directions like AI boyfriend/girlfriend (a very active scene, and common usage of locally hosted LLMs), or roleplaying (in a very broad sense). For such applications, it matters a lot less if in some boundary cases the LLM produces strange results.
If you want to get an impression of such scenes, google "character.ai" (roleplaying), or for AI boyfriend/girlfriend have a look at https://old.reddit.com/r/MyBoyfriendIsAI/
On the other hand, it's been shown time and again that we should do the opposite of whatever Zuck says.
That's how Zuck is. Gets excited and overhires.
I saw this during COVID and we were hiring like crazy.
Metaverse people: “We’re so back!”
Title is a bit misleading. Meta freezes hiring after acquiring and hiring a ton while, somewhere else, Altman says it's a bubble.
The more obvious reason for a freeze is they just got done acquiring a ton of talent
I genuinely believe SamA has directed GPT5 to be nerfed to speedrun the bubble. Watch, he’ll let the smoking embers of the AI market cool and then reveal the next big advancement they are sitting on right now.
Nothing would give me a nicer feeling of schadenfreude than to see Meta, Google, and these other frothing-at-the-mouth AI hucksters take a bath on their bets.
Can we try to not turn HN into this? I come to this forum to find domain experts with interesting commentary, instead of emotionally charged low effort food fights.
your comment somehow feels more emotionally charged and low effort than the original. here, let's continue that...
Why would that give you a nice feeling?
Because companies like Google and Meta, which are responsible for so much social damage, deserve a little comeuppance every once in a while.
Just until a few months ago, people on HN were shouting down anyone who argued that spending big money on building an AI might not be a good idea ...
How did this get pushed off the front page with over 100 points in less than an hour? YC does not like that kind of article?
#comments >>> #points --> flame war detector
Did the board realize Zuck was out of his mind or what?
Does it matter?
Zuckerberg holds 90% of the class B supershares. There isn't much the board can do when the CEO holds most of the shareholder votes.
all news is being manipulated to make a ton of money shorting stock, based on perceived bad news
Moving fast or sheep-like behavior?
This matches my view of AI: there is a huge bubble in current AI. Current AI is nothing more than a second-hand information-processing model, with inherent cognitive biases, lag behind environmental changes, and other limitations and shortcomings.
It never really made sense for Meta to get into AI, the motivations were always pretty thin and it seemed like they just wanted to ride the wave.
Isn't that what companies are supposed to do by seeing/following/setting trends in a way that increases revenue and profit for the shareholders?
I somewhat disagree here. Meta is a huge company with multiple products. Experimenting with AI and trying to capitalize on what's bound to be a larger user market, is a valid company angle to take.
It might not pan out, but it's worth trying from a pure business point of view.
Meta's business model is to capture attention - largely with "content" - so they can charge lots of money to sprinkle ads amongst that content.
I can see a lot of utility for Meta to get deeply involved in the unlimited custom content generating machine. They have a lot of data about what sort of content gets people to spend more time with them. They now have the ability to endlessly create exactly what it is that keeps you most engaged and looking at ads.
Frankly, content businesses that get their revenue from ads are one of the most easily monetizable ways to use the outputs of AI.
Yes, it will pollute the internet to the point of making almost all information untrustable, but think of how much money can be extracted along the way!
The whole point is novelty/authenticity/scarcity, though; if you just have a machine that generates an infinite supply of infinitely cute cat videos, then people will cease to be interested in cat videos. And it's not like they pay content creators anyway.
It's like Spain sinking its own economy by importing tons of silver.
We have services that will serve you effectively infinite cat videos, and neither cat videos nor the websites that serve them have ceased to be popular.
It is actually the basis for the sites that people tend to spend most of their time and attention on.
Facebook, Instagram, Reddit, TikTok all live on the users that only want to see infinite cat videos (substitute cat video for your favorite niche). Already much of the content is AI generated, and boy does it do numbers.
I am not convinced that novelty, authenticity, or scarcity matter in the business model. If they do, AI has solved novelty, has enough people fooled with authenticity, and scarcity... no one wants their cat video feed to stop.
The money committed to payroll for these supposed top AI hires is equivalent to a mid-size startup's entire payroll; no wonder they had to hit pause.
One of the strongest signals yet of this AI bubble.
In this hype cycle, you are in late 1999, early 2000.
All that's missing is an analogue to the pets.com Super Bowl ad!
They should put AI into a blockchain and have a proper ICO.
you laugh, but someone actually did that and raised like $300m.
https://0g.ai/blog/0g-ecosystem-receives-290m-in-financing-t...
it's all bullshit obviously, grift really seems like the way to go these days.
Smart contracts sometimes fail because they are executed too literally. Fixing that needs something like judges, but automated - so AI! It will be perfect. /s
It depends on your oracle. According to the Shiller PE of the S&P 500 it's only November '98! [0]
[0] https://www.multpl.com/shiller-pe
The team drew criticism from executives this spring after the latest Llama models underperformed expectations.
interesting
It's almost like nobody asked for the dramatic push of AI, and it was all created by billionaires trying to become even richer at the cost of people's health and the environment.
I still have yet to see it do anything useful. I've seen several very impressive "parlor tricks" which a decade ago i thought were impossible (image generation, text-parsing, arguably passing the turing-test) but I still haven't seen anybody use AI in a way that solves a real problem which doesn't already have an existing solution.
I will say that grok is a very useful research assistant for situations where you understand what you're looking at but you're at an impasse because you don't know what its name is and are therefore unable to look it up, but then it's just an incremental improvement over search-engines rather than a revolutionary new technology.
LLMs are not the way to AGI and it's becoming clearer to even the most fanatic evangelists. It's not without reason GPT-5 was only a minor incremental update. I am convinced we have reached peak LLM.
There's no way a system of statistical predictions by itself can ever develop anything close to reasoning or intelligence. I think maybe there might be some potential there if we combined LLMs with formal reasoning systems - make the LLM nothing more than a fancy human language <-> formal logic translator, but even then, that translation layer will be inherently unreliable due to the nature of LLMs.
Yup. I agree with you.
We're finally reaching the point where it's cost-prohibitive to sweep this fact under the rug with scaling out data centers and refreshing version numbers to clear contexts.
Good call in this case specifically, but lord, this is some kind of directionless leadership, despite well-thought-out concerns over the true economic impact of LLMs and other generative AI tech.
Useful, amazing tech, but only for specific niches, and not as a generalist application that will upend and transform the world as we know it.
I find it refreshing to browse r/betteroffline these days after 2 years of being bombarded with grifting LinkedIn lunatics everywhere you look.
Must be nice to do whatever he wants, without worrying about consequences…
Is this the stage of the bubble where they burst the bubble by worrying that there’s a bubble?
> Mark Zuckerberg has blocked recruitment of artificial intelligence staff at Meta, slamming the brakes on a multibillion-dollar hiring spree amid fears of an AI bubble.
> amid fears of an AI bubble
Who told the telegraph that these two things are related? Is it just another case of wishful thinking?
Mark created the bubble. Other investors saw few opportunities for investment, so they put more money into a few companies.
What we need is more independent and driven innovation.
Right now the greatest obstacle to independent innovation is the massive data banks the bigger companies have.
In a few months: "sorry, my whims proved wrong again, so we'll take the healthcare and stability away from, I guess, 10% of you."
I feel like the giant 100 million / 1 billion salaries could have been better spent just hiring a ton of math, computer science, and data science graduates and forming an AI skunkworks out of them.
Also throw in a ton of graduates from other fields: sciences, arts, psychology, biology, law, finance, or whatever else you can imagine, to help create data and red team their fields.
Hiring people with creative writing and musical skills to give it more samples of creative writing, songwriting, summarization, etc.
And people who are good at teaching and breaking complex problems into easier-to-understand chunks for different age brackets.
Their user base is big, but it's not the same as ChatGPT's; they won't get the same tasks to learn from users that ChatGPT does.
Is it just me or does it feel like billionaires of that ilk can never go broke no matter how bad their decisions are? The complete shift to the metaverse, the complete shift to LLMs and fat AI glasses, the bullheaded “let’s suck all talents out of the atmosphere” phase and now let’s freeze all hiring. In a handful of years.
And yet, billionaires will remain billionaires. As if there are no consequences for these guys.
Meanwhile I feel another bubble burst coming that will leave everyone else high and dry.
the top 100 richest people on the globe can do a lot more stupid stuff and still walk away to a comfortable retirement, whereas the bottom 10-20-.. percent doesn't have this luxury.
not to mention that these rich guys are playing with the money of even richer companies with waaay too much "free cash flow"
Why is he always behind the curve, always?
To be fair it worked for him the first time. And for Apple too multiple times for that matter.
He and his company have not innovated since they created Facebook. They bought their success after that.
[dupe] Earlier: https://news.ycombinator.com/item?id=44968111
It could be that, beyond the AI bubble, Meta has a broader understanding of economic conditions. Corporate spending cuts often follow such insights.
"Now let's make the others doubt that this is a meaningful investment"
After phase 1, "the shopping spree".
"Mark Zuckerberg freezes AI hiring after he personally offered 250M to a single person and the budget is now gone."
How to make a bubble pop: announce a trillion dollar company has stopped hiring in that area.
“Man who creates bubble now fears it”
If AI really is a bubble and somehow imploded spectacularly for the rest of this year, universities would continue to spit out AI specialists for years to come. Mr. Z. will keep hiring them into every opening that comes up whether he wants to or not.
Silicon Valley has never seen a true bubble burst; even the legendary dot-com bubble was a minor setback from which the industry fully recovered in about 5-10 years.
I have been saying for at least 15 years now that eventually Silly Valley will collapse when all these VCs stop funding dumb startups by the hundreds in search of the elusive "unicorns", but I've been wrong at every turn as it seems that no matter how much money they waste on dumb bullshit the so-called unicorns actually do generate enough revenue to make funding dumb startup ideas a profitable business model....
Explains why AI companies like Windsurf were hunting for buyers to hold the bag
as an outsider, what I find the most impressive is how long it took for people to realize this was a bubble.
Has been for a few years now.
Note: I was too young to fully understand the dot com bubble, but I still remember a few things.
The difference I see is that, in contrast to websites like pets.com, AI gave the masses something tangible and transformative, with the promise it could get even better. Along with these promises, CEOs also hinted at a transformative impact "comparable to Electricity or the internet itself".
Given the pace of innovation in the last few years I guess a lot of people became firm believers and once you have zealots it takes time for them to change their mind. And these people surely influence the public into thinking that we are not, in fact, in a bubble.
Additionally, the companies that went bust in the early 2000s never had such lofty goals/promises to match their lofty market valuations, and because of that, current high market valuations/investments are somewhat flying under the radar.
> The difference I see is that, conversely to websites like pets.com, AI gave the masses something tangible and transformative with the promise it could get even better.
The promise is being offered, that's for sure. The product will never get there, LLMs by design will simply never be intelligent.
They seem to have been banking on the assumption that human intelligence truly is nothing more than predicting the next word based on what was just said/thought. That assumption sounds wrong on the face of it and they seem to be proving it wrong with LLMs.
I agree with you fully.
However, even friends/colleagues that like me are in the AI field (I am more into the "ML" side of things) always mention that while it is true that predicting the next token is a poor approximation of intelligence, emergent behaviors can't be discounted. I don't know enough to have an opinion on that, but for sure it keeps people/companies buying GPUs.
> but for sure it keeps people/companies buying GPUs.
That's a tricky metric to use as an indicator though. Companies, and more importantly their investors, are pouring mountains of cash in the industry based on the hope of what AI may be in the future rather than what it is today. There are multiple incentives that could drive the market for GPUs, only a portion of those have to do with today's LLM outputs.
You really can't compare "AI" to a single website, it makes no sense
It was an example. Pets.com was just the flagship (at least in my mind), but during the dot com bubble there were many many more such sites that had an inflated market value. I mean, if it was just one site that crashed then it wouldn't be called a bubble.
From the Big Short: Lawrence Fields: "Actually, no one can see a bubble. That's what makes it a bubble." Michael Burry: "That's dumb, Lawrence. There are always markers."
Ah Michael Burry, the man who has predicted 18 of our last 2 bubbles. Classic broken clock being right, and in a way, perfectly validates the "no one can see a bubble" claim!
If Burry could actually see a bubble/crash, he wouldn't be wrong about them 95%+ of the time... (He actually missed the covid crash as well, which is pretty shocking considering his reputation and claims!)
Ultimately, hindsight is 20/20 and understanding whether or not "the markers" will lead to a major economic event or not is impossible, just like timing the market and picking stocks. At scale, it's impossible.
I feel 18 out of 2 isn't a good enough statistic to say he is "just right twice a day".
What was the cost of the 16 missed predictions? Presumably he is up overall!
Also doesn't even tell us his false positive rate. If, just for example, there were 1 million opportunities for him to call a bubble, and he called 18 and then there were only 2, this makes him look much better at predicting bubbles.
If you think that predicting an economic crash every single year since 2012 and being wrong (except for 2020, when he did not predict a crash and there was one) is good data, by all means continue to trust the Boy Who Cried Crash.
This sets up the other quote from the movie: Michael Burry: “I may be early but I’m not wrong”. Investor guy: “It’s the same thing! It's the same thing, Mike!”
Confirmation bias much?
what happened to the metaverse??? I thought we finally had legs!
Seriously, why does anyone take this company seriously? It's gotta be the worst of big tech, besides maybe anything Elon touches, and even then...
1. They've developed and open sourced incredibly useful and cool tech
2. They have some really smart people working there
3. They're well run from a business/financial perspective, especially considering their lack of a hardware platform
4. They've survived multiple paradigm shifts, and generally picked the right bets
Among other things.
0. Many people use Facebook Messenger as their primary contact book.
Even my parents are on Facebook Messenger.
Convincing people to use signal is not easy, and there are lots of people I talk to whose phone number I don't have.
all of those are basically true for every big tech company
So we agree that Meta is at minimum the equal of every big tech company?
we agree that at maximum it's the equal of every tech company
In what other metrics do other big tech companies exceed it, causing them to be ranked higher?
itt: confirmation bias
"We believe in putting this power in people’s hands to direct it towards what they value in their own lives"
Either Zuckerberg has drunk his own Kool Aid, or he is cynically lying to everyone, but neither is a good look.
While I think LLMs are not the pathway to AGI, this bubble narrative appears to be a concerted propaganda campaign intended to get people to sell, and it all started with Altman, the guy who was responsible for always pumping up the bubble. I don't know who else is behind this, but the Telegraph appears to be a major outlet of these stories. Just today alone:
https://www.telegraph.co.uk/business/2025/08/20/ai-report-tr... https://www.telegraph.co.uk/business/2025/08/21/we-may-be-fa... https://www.telegraph.co.uk/business/2025/08/21/zuckerberg-f...
Other media outlets are also making a massive push of this narrative. If they get their way, they may actually cause a massive selloff, letting everyone who profited from the giant bubble they created buy everything up cheap.
If there is a path to AGI, then ROI is going to be enormous, literally regardless of how much was invested. Hopefully this is another bubble; I would really rather not have my life's work vaporized by the singularity.
I think I have said it before here (and in real life too) that AI is just another bubble, let alone AGI which is a complete joke, and all I got is angry faces and responses. Tech always had bubbles and early adopters get the biggest slice, and try as much as possible to keep it alive later to maximize that cut. By the time the average person is aware of it and is talking about it, it's over already. Previous tech bubbles: internet, search engines, content makers, smartphones, cybersecurity, blockchain and crypto, and now generative AI. By the way, AI was never new and anyone in the field knows this. ML was already part of some tech before generative AI kicked in.
Glad I personally never jumped on the hype and still focused on what I think is the big thing, but until I get enough funds to be the first in the market, I will keep it low.
I don’t think it’s entirely a bubble. Definitely this is revolutionary technology on the scale of going to the moon. It will fundamentally change humanity.
But while the technology is revolutionary the ideas and capability behind building these things aren’t that complicated.
Paying a guy millions doesn't mean shit. So what Mark Zuckerberg was doing was dumb.
> on the scale of going to the moon
Of all the examples of things that actually had an impact, I would pick this one last... Steam engine, internet, personal computers, radios, GPS, &c. But going to the moon? The thing we did a few times and stopped doing once we won the USSR vs USA dick contest?
Impact is irrelevant. We aren't sure about the impact of AI yet. But the technology is revolutionary. Thus, for the example I picked something that's revolutionary but whose impact is not as clear.
> Pays 100 million for people, wonders if theres a bubble
The most likely explanation I can think of is drugs.
Offering 1B-dollar salaries and then backtracking: it's like when that addict friend calls you with a super cool idea at 11pm and then 5 days later regrets it.
Also rejecting a 1B salary? Drugs, it isn't unheard of in Silicon Valley.
How did this get pushed off the front page with over 100 points in less than an hour?
YC does not like that kind of article?
THANK YOU
... the bubble that he created? After he threw $100,000,000,000 into a VR bubble mostly of his making? What a fucking jackass manchild.
BTW: Meta specifically denies that the reason is bubble fears, and they provide an alternate explanation in the article.
Better title:
Meta freezes AI hiring due to some basic organizational reasons.
They would deny bubble fears even if leaked emails proved that it was the only thing they talked about.
Would anyone seriously take Meta's, or any megacorp's, statements at face value?
A hiring freeze is not something you do because you are planning well.
Hey, they only get huge options/stock based on the growth of the business.
Plus they will have had a vesting schedule.
Beside the point that it was mental, but the dude wanted the best and was throwing money at the problem.