
OpenAI plans to become a for-profit business — here’s what that means for the AI company

In addition, CTO Mira Murati and two other executives just announced that they're leaving the company.
By Matthews Martins on 
OpenAI is looking to take control from its non-profit in order to focus on the for-profit side of the organization. Credit: OLIVIER DOULIERY/AFP via Getty Images

Comments

  1. There's no point. Samsung, Google, Apple, Sony, as well as a handful of third-party mobile developers and websites, all have their own AI models. Unless it's a robot climbing out of my phone that can clean my dishes, there isn't anything unique you can do with it.

  2. Anyone still remember how they promised never to be for-profit and never to work for the military? Funny, yeah.

  3. Seems like they need to change their name from OpenAI to $$$$AI

  5. #ElonMosquito vs #littlefinger

  7. Our only hope is that someone else will solve the alignment problem and achieve ASI first.

    It's obvious that Altman has no intention of proceeding with safety as the highest priority.

  8. Power corrupts; ultimate power corrupts ultimately. The panacea of peace and prosperity AI could bring, if aligned only with corporate interests, will widen disparity and allow the “truth” it provides to be controlled and manipulated.

  9. Sam Altman can’t use the whole “I don’t even have equity in this company” excuse anymore.

    Replies
    1. I know he loves saying that 😂😂

    2. He loves being a billionaire more

    3. There was always an undercurrent of "if this is successful, money won't have any meaning." This move feels very bearish, tbh. If he saw AGI on the horizon, cashing out wouldn't make much sense.

    4. Very true, not to mention all the great talent leaving the company. If they were really on to AGI, who wouldn't want to be a part of that?

    5. I'm not sure that leaving implies failure - going to Mistral, Meta, or Anthropic puts you at roughly the same spot progress-wise, with less toxic leadership.

    6. Or way more cash and authority. Let's not forget the reason most people switch jobs - a pay rise.

    7. Yeah or money and power will still matter a lot in the age of AGI and he wants to control it all.

    8. They're gonna try hard to make sure money ALWAYS has meaning, because it's what currently gives them power. If you had all the money, why would you ever do anything that jeopardizes that?

    9. Or he feels AGI and then superintelligence is a given, but he saw that keeping a trick secret does not work, as proven by other companies catching up (or being ahead for a time, like Anthropic). So he cashes out now, to enjoy the 5-15 years until money does not matter anymore.

    10. Now he can use equity in this company.

  10. So what happens to the folks who donated money when it was a non-profit? Seems kinda weird that they could develop the tech as a non-profit, then turn fully for-profit once they got product-market fit.

    Replies
    1. Same as what happened to the volunteers who made full courses for Duolingo for free... guess what.

    2. They get fucked. Regulators will need to step in, for the simple reason that other companies will now try to copy this 'not for profit' model. Tax avoidance, etc.

    3. Like, I understood having a for-profit entity below that licenses from the non-profit, with the non-profit holding equity - but now, yeah, it seems like a scam.

    4. What about it was tax avoidance? The corporate entity pays taxes

    5. It needs to make a profit to pay tax… or it can roll forward tax losses.

    6. That’s not how this capitalist game works.

    7. The side of the company that released GPT hasn't been a nonprofit since 2019. The 501(c)(3) side still exists.

  11. So can we all admit that Ilya and the old board members were absolutely right to want to get rid of Altman, and that the OpenAI employees who threatened to leave to join Microsoft and posted hearts on Twitter like lemmings were dupes?

    Replies
    1. They weren't dupes, they were complicit. They voted against the coup because they wanted that sweet, juicy Microsoft money.

    2. Probably some combination of the two.

    3. It’s funny to me that Ilya went to start a for-profit company, but this sub holds him up as this saint.

    4. For profit is how you get funding. OpenAI was a fully unique unicorn in getting funding without a profit motive or profit promise (or even capped profit promise). Ilya isn’t calling his new company totallyopensourcefreesoftwarefortheworld and isn’t telling everyone he has no profit incentive in the company when it’s not fucking true. You can sell whatever you want, your morals show in how you represent it to the world.

    5. Yeah, good point, let's just support founders who never had any intention to be anything but greedy.

      OpenAI NEVER had open-source models. OpenAI NEVER released a product as a 501(c)(3).

      OpenAI was founded as an R&D lab. Yes, nonprofit. And that still exists. The nonprofit side is still a separate thing.

      And everybody is like “Elon was right all along” (and in the meantime Elon is just fine-tuning an open-source model and x.ai is totally for-profit).

      OpenAI was NEVER a unicorn until they created the for-profit arm of the organization in 2019. The 501(c)(3) was never a fundraising unicorn.

      They're all capitalists, and so are you. At what point was Sam less “evil” than Anthropic? At what point did Elon go off and start a nonprofit R&D lab? Where's the outrage there?

      They're all the same.

    6. Because he didn’t lie about it. I really dislike liars.

    7. Good news is talent is leaving and those same lemmings will be leaving too.

    8. I somehow don’t think that OpenAI will struggle to attract talent

    9. They are. I'm in the industry and know people on the inside. Their compensation structure is nonsensical and their competitors are catching up fast. The Sam drama, and Sam himself, isn't helping the situation.

    10. In theory, yes, but with any startup the details are what matter.

      Google “OpenAI PPU” or https://www.reddit.com/r/startups/s/fBb6cjHb0D

    11. What makes it nonsensical?

    12. Can't they pay a bunch given their valuation? That's what I would intuitively think. In what way is their comp structure nonsensical?

    13. Dude, a company's valuation is not the money it has. It's what their investors' stakes would be worth if they sold. None of that is money you can use for salaries.

    14. It is when paying staff in equity, which is what most companies do.

    15. Except it's the opposite. Most grants aren't based on a percentage of the company; rather, they're a fixed dollar amount. So you don't want sky-high valuations: there's less room to go up and more room to crater if the employee holds. OpenAI's equity was far more appealing two years ago than it is today.


    16. I would say it's a percentage of the company when it's a small company or you have a larger stake, but as a later-stage employee it is typically a dollar amount. Look at FAANG offers: the stock grant is based on a dollar amount, and how many shares that equals is evaluated either at the start of the grant or yearly (see the toy sketch at the end of this thread).

      In this case you want the company to be at a higher valuation, because it means the company is more likely to succeed and eventually do an initial public offering, so you can eventually sell your stock on the public market rather than in a private sale.

    17. Not the opposite. I just meant that equity is usually part of comp at tech firms and startups. A high valuation indicates that that portion of an individual's comp is less risky (but, as you said, there's also less room for growth).

      I'd argue it's still likely a decent investment, because any time they do a raise there are large private investors lining up at the door. But you're right, two years ago it would have been even better (though also riskier).

    18. Google “OpenAI PPU”. Their comp structure is uniquely bad, and that says a lot given startup land. When they do grants, it's not equity but a quasi profit-sharing mechanism that looks like it has a lot of problems.

      https://www.reddit.com/r/startups/s/fBb6cjHb0D

    19. That's the reason this structure is changing, though: they'll now be able to offer equity like everyone else. And I'm a headhunter for high-level AI/ML talent; they absolutely have talent drawn to them, and will now have even more.

    20. Correct, but now they need to redo their entire comp structure and figure out how to make existing employees whole. It's a nightmare. So people are holding off on joining until it's all settled.

      It's a huge problem for their recruitment, at a time when speed and growth are needed to win.

      Their corporate structure and compensation model are an existential threat at this point.

    21. Oh, for sure, a goddamn nightmare... it's like the entire tech landscape in 2021. Nearly all of my clients in 2021 had to pay more for talent than they ever had before, and found themselves having to figure out how to hire a sr. dev who needs more money than the lead dev he'd report to. So you either underpay relative to the overall market or give everybody a raise, which also isn't sustainable.

      That problem definitely sorted itself out naturally in 2024, sadly.

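Aside: a toy Python sketch of the fixed-dollar grant mechanics described in replies 15-17 above. The numbers are hypothetical illustrations, not OpenAI's actual terms.

```python
# Toy illustration of a fixed-dollar equity grant (hypothetical numbers):
# the dollar value is fixed when the grant is priced, so the share count,
# and the remaining upside, depend on the valuation at that moment.
grant_usd = 400_000                      # four-year grant value
shares_at_80 = grant_usd / 80.0          # priced at $80/share -> 5000 shares
shares_at_160 = grant_usd / 160.0        # priced after a run-up -> 2500 shares

print(shares_at_80, shares_at_160)
# The same dollars buy half the shares once the valuation doubles, which is
# why a sky-high valuation can make a grant less attractive to new hires.
```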
  12. Oh well now it makes sense why their CTO resigned today. Ha ha

    Replies
    1. Yeah cause who wants millions of dollars in equity that she could now get lol?? She was pushed out. “Oh now I can have a normal comp package and literally double my net worth overnight? Awesome!! But… I think I’ll choose to leave now. Not in 6 months, now”

      😂😂😂😂

    2. ...or she believed in the nonprofit mission and is disappointed in the move to for-profit.

      There are people who are not primarily motivated by money.

    3. She left to start a company, bro… same with Ilya. Both are starting or have started their own for-profit companies, likely because they know they can get billion-dollar valuations from an idea alone, given their history/brand name.

    4. I don’t believe this for a second and neither should you.

    5. Additionally -- how much do you want to bet her next job is NOT at a nonprofit? If she takes a job at a nonprofit, I will completely agree that you are right. But you're not. Ilya gets put in a special category in this sub even as he goes off to start a for-profit company that is not run by a nonprofit board.

    6. They have been a for-profit company since 2019

  13. Sam "I don't care about money lol" Altman. This is the best we've got, huh? We're so fucked.

    Replies
    1. What’s amazing is you’ll see permanent defenders on here till the day it all goes down.

    2. I used to defend him until the whole NDA debacle happened. Now I'm just disappointed.

    3. I call them samastans.

      Or .. they're from samastan

    4. "oh no the venture capitalist that worked in a ruthless sector has done something that a venture capitalist would do"

      I'm not mocking you, I'm mocking who really expected this ceos to be some sort of enlightened human being. I'm not saying it was predictable, I'm saying it's not that surprising to me

  14. That means Microsoft is now controlling it. They had 49%, the non-profit held 51%; now they sell or give away part of that while Microsoft likely keeps investing to get at least 2% more. It would take only $3 billion to take control of a $150 billion company, on a $13 billion total investment (arithmetic sketched after this thread).

    Replies
    1. They’re going to go public, and also, creating new shares is a thing lol.

    2. The nonprofit holds 100% of equity.

      Microsoft at one point held 49% of PPUs. But there are many more investors now so it got diluted.

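For reference, here is the arithmetic the parent comment is doing, taking its 49%/51% premise at face value (a premise the replies above dispute):

```python
# Sanity check of the comment's math, under its own assumptions:
valuation = 150e9                 # reported ~$150B valuation
ms_stake = 0.49                   # assumed current Microsoft share
extra_needed = 0.51 - ms_stake    # 2% more for majority control
print(extra_needed * valuation)   # 3e9 -> about $3B, on top of the ~$13B
                                  # Microsoft has reportedly invested so far
```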
  15. The only way this should be legal is by forming a new corporation that acquires the for-profit entity at its current valuation. You'd obviously have to pencil out the value of vested and purchased PPUs vis-à-vis equity, and what portion is owned by other investors. Then add a healthy acquisition premium on top.

    Any donor who donated should have those donations convert into PPUs at the contemporaneous valuation, which will dilute other PPU owners.

    And the nonprofit ends up with the proceeds.

    Just rewriting a nonprofit into a for-profit corporation and giving yourself a huge amount of equity in it should be flat-out illegal. I guess nobody has ever tried to do this before. But what is to stop any other nonprofit from doing the same and enriching themselves?

    Replies
    1. Hasn't the for-profit entity existed for five years now? Is there any rule about selling or transferring ownership away from the non-profit, or creating a new for-profit entity?

    2. The issue, as I assume it will play out, is that the OpenAI nonprofit, which owns the for-profit 100%, will get nothing in return. It's essentially giving it away for free.

  16. I'm in favor of acceleration and think that the x-risk is way overblown.

    This isn't the greatest move (since large concentrations of capital are one of the biggest threats to the world), but if it gets us to AGI faster, then it's worth it.

    Replies
    1. Yes, but most of that talent was heavily in favor of deceleration. I don't know what the ultimate effect will be. If the AI gets a little stronger, it won't matter how smart the humans are, just how willing they are to pull the plug.


    2. I still think it's gonna be tough to replace that talent. Sure, these guys borrowed from other papers, but they also pioneered a lot of advances in LLMs, and seeing them break up just makes me question the future of the company.

      Not that it's going anywhere, though. They have tons of investors lining up to invest in the company, but it also makes me wonder if they really know what they're getting themselves into, or if they know something we don't that makes them feel confident enough to value the company that highly.

    3. They were in favor of deceleration for good reason, though. I don't understand how you can think it's a good thing that everyone trying to take a step back and be cautious about the direction we're heading is jumping ship.

    4. This is a great position to have because nobody will be alive to tell you that you were wrong

    5. Do you mean that x-risk is real but low, 1% or lower, or that x-risk is outside the realm of the feasible, like Luxembourg conquering the whole world in a week?

    6. It is real but much lower than those who focus on it believe. There are also huge potential benefits that outweigh the x-risk.

    7. How large would the x-risk have to be, roughly as a percentage (you can give a range if you want), before you would think the potential benefits no longer outweigh the risk?


    8. The human species is very adaptable, so actual extinction is going to be very hard. Risks that are near extinction level, like nuclear war or a man-made virus, are more plausible.

      The thing is, we've been living with these risks for quite some time. We have lived under the shadow of nuclear war for half a century.

      Another way of looking at it: every human is a world unto themselves. If I die, then my everything is gone. Nothing I leave behind can possibly matter to me, because I can't ever benefit from it, even emotionally. Yet we all constantly take risks that could kill us.

      The greatest x-risks are those from outside the Earth: asteroid strikes, cosmic ray bursts, etc. The fact that we are all clumped together in one place is the biggest risk there is.

      Finally, I don't necessarily believe that the end of Homo sapiens would be the "extinction" of our civilization. Transhumanism will come about, and I am perfectly comfortable seeing intelligent AI as our children, as much as future Homo sapiens are our children.

      The x-risk from AI that I care about, and that matches what everyone else fears, is the risk that it becomes a tyrant. The most likely way for that to happen is if it is owned and developed solely by tyrannical entities, like the CCP.

      My answer to how big the x-risk needs to be to justify a pause is basically that all of the x-risks I see are made worse by pausing, so the larger the risk, the more we should accelerate. If one could show both that these other risks are very low and that the classical paperclip or Terminator risks are even real, then I would change my mind (one should always be open to evidence), but the evidence so far points away from those possibilities.

    9. I do agree with your general outlook: humanity is incredibly resilient, the x-risk (as extinction risk) comes from world-ending asteroids or other extreme events from space, and spreading out to other planets is desirable for the long-term survival of humanity.

      What you describe as tyrant AI is a real concern, generally described as s-risk. I think it is important to keep both in mind but treat them as separate concepts; there can be trade-offs between them, but both can be avoided by an international agreement to stop pursuing ever more powerful AI.

      The general estimate for humanity-ending space phenomena is roughly 0.000001% per year; I'd be fine with a similar yearly risk from AI, assuming it helped us reduce that risk by spreading out to other planets or solar systems (a worked version of this yearly-risk framing follows at the end of this thread).

      As for your general point about everyone being a world unto themselves, I share this sentiment. I'm not romantic about people dying from aging, sickness, and accidents to make space for new life; I'd much rather have everyone who is living keep living as long as they want, with fewer babies, than have everyone die and more babies.

      I do think it is not permissible (per my values) to significantly increase x-risk (extinction) for the benefit of those alive today at the cost of everyone in the future, and it is doubly impermissible to increase s-risk.

      Of course, that is just my view, and there is nothing wrong with people who think differently. I do respect those who say they are willing to risk humanity to save their own life from certain death, especially if that future would be better for everyone, and especially if they view life on Earth as having negative utility now.

    10. Future people aren't real, at least not yet. We should do our best to leave them a good world, but that shouldn't come at the cost of significant harm to present humans.

      A big issue, though, is that the steps necessary to stop AI guarantee both a short-term tyrannical world and long-term total extinction. Since AI is just computer code, you would need to roll back progress on computers, prevent any more from happening in the future, and prevent people from writing code to build AIs. Those who fear x-risk the most think we are close to self-improving AI, so we would need to ban all future research on algorithmic improvements.

      That level of lockdown would absolutely devastate all other forms of intellectual advancement (you can use the computers we design space suits with to build an AI), which means we would stay a single-planet species. Then there is a 100% chance that all life on Earth will eventually be destroyed.

      The core problem with the x-risk-from-AI argument is that it is definitionally unsolvable. Since this AI will be smarter than us, and will trick us, there is no way to gain perfect confidence that we can be safe. Thus the only logical end goal of hard "we must prevent x-risk" AI safety is the destruction of humanity, first by breaking our soul and later by breaking our bodies.

      AI safety research that recognizes we can never completely eliminate the possibility of risk, and that lets people do the best they can, is useful and should continue. The problem is that we see situations where OpenAI creates GPT-2, then ChatGPT, and now this voice assistant, and each time there are safety teams in the company screaming that it isn't safe and then quitting when it is released. For someone who truly believes in x-risk and the idea that the future is infinitely more morally relevant than the present, any progress on AI is immoral.

      This is why I expect nothing to ever come out of Ilya's new company. It is also why I am fine with seeing OpenAI lose the safety-minded people.


    11. I think our viewpoints mostly diverge on the future of humanity; I think the concerns of future humans weigh heavily when it comes to extinction risk.

      If I were primarily concerned with the humans alive today, and s-risk were not a concern, I would be willing, for example, to vote for ASI this year at a 10% risk of extinction, even though the probability of extinction fell by 0.01% for every year we waited, since that would save almost everyone alive today if it went well, and nobody in the future would be harmed if it went badly. I would of course also be biased, because I have a ~10% chance of dying in any given year, but even if that were 0.1%, I still think I'd vote the same. Especially if I believed other x-risks (extinction) to be roughly comparable and the post-AI world to carry closer to 0% risk.

      I'm maybe naive in thinking we can keep our current technology and even expand narrow AI. But I'd personally take a regression in technology, fixed at pre-industrial levels indefinitely at a much lower population, as bad as that would be, over rolling the dice on ASI today, given what I believe the risk to be.

      Overall, I think humanity can coordinate when we believe something is important: we managed to advance both medicine and cloning tech without ever cloning any humans, to the point where people are cloning their pets, because we collectively agreed on it.

      Thanks for sharing your perspective; I think it makes a lot of sense given the starting premises.

    12. Absolutely! Acceleration and uncensorship are crucial. We can't ignore reality any longer. Like any past change, this will have both advantages and disadvantages for our society.

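Aside: a worked version of the "acceptable yearly risk" framing in replies 9 and 11 above, assuming (as a simplification) a constant, independent risk each year:

```python
# Cumulative risk from a constant, independent yearly risk p:
#   P(at least one event in n years) = 1 - (1 - p)**n
# Using the ~0.000001% per year (1e-8) figure cited above for natural,
# space-based extinction events.
p = 1e-8
for years in (100, 10_000, 1_000_000):
    print(years, 1 - (1 - p) ** years)
# ~1e-6 over a century, ~1e-4 over 10,000 years, ~1% over a million years,
# versus the one-shot 10% gamble discussed in reply 11.
```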

  17. I mean, he's a Y-Comb guy; we all knew this was going to happen from square one. What governments need to ensure is the wide availability of AI tech.

    As long as everyone can, in theory, have access to comparable technology, it's palatable. But if the people and entities who can pay the most get access to something not even in the same stratosphere as everyone else, AI becomes an utter mess long before we get to chin-stroking and talking about AGI.

    Replies
    1. "we all knew this was going to happen from square one"

      We absolutely did not.

      OpenAI's non-profit structure was pretty much uncharted waters with little precedent, and we (meaning Redditors, the market in general, most of their investors, …) literally had no idea how it would shake out, especially with all the board drama and the key departures along the way. It's easy to say "we all knew all along!" after things actually happen, but the truth is we very much didn't know.

    2. This is a good snapshot of how wildly the sentiment swung at the time.

      https://manifold.markets/NealShrestha58d3/sam-altman-will-return-to-openai-by?play=true

      EDIT: Nobody had any idea, it seemed extremely unlikely that Sam Altman would return in any capacity at all, let alone be CEO again.

    3. You don’t hire the Y-Comb guy and then get to be surprised when he’s in it to make money.

  18. I kind of miss the days when the most consequential thing about OpenAI was their CartPole tutorial.

    Replies
    1. I still remember the good ol’ days of trying to land the 2D rover.

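For anyone who missed that era, the Gym "hello world" looked roughly like this - a minimal sketch against the old, pre-0.26 gym API, where "LunarLander-v2" is the 2D lander mentioned above:

```python
# Minimal sketch of the classic OpenAI Gym loop (old, pre-0.26 gym API).
# Swap "CartPole-v1" for "LunarLander-v2" to get the 2D lander. Actions
# here are random; the tutorials left the actual learning to you.
import gym

env = gym.make("CartPole-v1")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()    # stand-in for a trained policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print(f"Episode return: {total_reward}")
```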

  19. Sucks, though I understand the temptation to make it a profit venture; I am certain there is much to exploit. I don't think this direction is great for humanity as a whole, but we have a lot of important things partly or fully under private control, with varying levels of success and reliability, so it's not necessarily a recipe for immediate failure and doom. However, this company represents a major shift in technological norms and labor capability, and here it is, embedded as just another profit venture among the many things that would perhaps be better off socialized or nationalized instead.

    I hope the products remain accessible to the general public.

    Replies
    1. Remain accessible to the general public…? Why would they remove access to the general public? Cause they hate money…??

  20. That didn't take too long.

    I can't get past how many times they've been hacked. Incompetence starts at the top. The CTO just quit or was walked out, so I'm holding my breath hoping the next person actually knows how to do their job.

    Profit is good and all, but letting foreign actors have full access to code and weights, and maybe even build backdoors, is worse.

    Llama for the win. I hope they release an "advanced voice model" soon.

  21. "not consistently candid" 🤷‍♂️

  22. "You could parachute him into an island full of cannibals and come back in five years and he’d be the king.” - Paul Graham on Sam Altman

    Turns out it took him less than 9 months

  23. Sam Altman is basically following Sam Bankman-Fried's playbook: lying and using "altruism," or in this case a "non-profit," to stoke the idealism of the naive engineers who built the place. Diabolical, and not to be trusted again.

  24. Provocative question: Shouldn’t most of the stock be set aside to pay the UBI for the workers permanently replaced by OpenAI’s AI?

  25. Biggest clickbait of the century

  26. How will they explain fair use of data from the entire internet?

  27. Will he be richer than Elmo?

  28. poweeeerrrrrr unlimiteddddd powwwwaahhhhh

  29. Aww, OpenAI was getting pretty good, with how much of the prudish moralizing they'd removed from their model (at least via the API); now it's going back to being worthless like all the others.

