AI Doomers, EA, E/ACC and Decels
Tech political spectrums. Where do you think you belong? AI risk or AI hype?
I always wanted to write about left, right, liberal, and political spectrums. But I procrastinated on it for a long time. I started several times and then stopped; my mind kept pushing me away from it. I tend to follow that kind of unconscious instinct.
I think what I’m going to write is also correlated with the traditional political spectrum.
This might not be for everyone, especially non-tech people, but if you’re up for it, do read it and let me know your thoughts.
For the past few months, I noticed “e/acc” as a suffix in a lot of people's handles across multiple social media platforms. I didn't give it much attention at the beginning, but one day (especially after watching After Life, a sentence from it) I searched for it. That’s when I was introduced to these concepts. The more I tried to learn about them, the more I realized one thing - these are the classifications of tech politics, and we all belong somewhere in them.
I recently watched the movie Oppenheimer, and it is the best example. From my point of view, the movie travels through all of these spectrums.
Some of you may not relate these concepts to the videos I shared, and that’s understandable. Theories provide a framework for understanding, but they are not always directly applicable to individual experiences, right?
I attached a scene for each. Video copyright belongs to the original owners; I’m using them here for illustrative purposes only.
EA (Effective altruism)
A social movement that focuses on using evidence and reason to help others.
Decels (Decelerationists)
People who advocate that the development of AGI should be slowed down.
E/ACC (Effective accelerationism)
Supporters of the rapid and uncontrolled development of technology, even if it poses significant risks to humanity.
AI doomers
People who believe the development of AGI is an existential threat to humanity.
As you already know, atomic bombs are nuclear weapons, but controlled nuclear reactions give us clean energy. The theory behind both is the same, right?
When it comes to any revolutionary technology, it’s all about finding the right balance, trusting it with the right set of people, and finding the most friendly, human-centered use cases.
But do you really think politicians know the depth of it? Especially when it’s not as simple as flipping a switch off; it's much, much more than that.
Where do you position yourself?
Some might ask here, why position ourselves at all?
I think it’s important to realize (only for ourselves, not for public judgment) where we stand in each revolution and issue that’s happening.
For every action and reaction, we have a response: “I don't care”, support, or opposition. And when a lot of opinions like this accumulate, our position on the political spectrum can be identified. Whatever it is, if it’s not for the good, then people may start to judge you.
What I’m talking about here is AGI1.
Many people dismiss this as “people panicking because they saw a few fictional movies about AI overlords”. Well, it’s not just that. It’s all about perspective; how we see things is entirely up to us.
To understand this, first, we have to know our emotions.
Fear is an emotion, right? People have the right to fear something even if they haven't fully understood it yet. They fear for their safety, jobs, money, assets, relationships, etc. And fear is only one emotion; we have a lot of simple and complex emotions. And this is what we are planning to give to machines and call consciousness.
AGI Realism
It will probably inherit all the complex things from humans. Saying one thing and acting totally against it is one of the things that makes us human. I don't think you can just filter out the bad qualities if it gets anywhere near consciousness. How does a child react when we tell them not to do something? Huh? And what happens when they find it out on their own?
Think about it - of course, this scenario is currently fictional, but it’s multi-layered. There’s no definite answer. What we can do is test, test, test, and test forever.
Hitler and Anne Frank - both were humans, but totally different. This is exactly my point. Humans are complicated, so going at full speed with minimal-to-no safety measures and AI alignment is like betting against ourselves.
People will lie, and so will AGI.
People will try to manipulate, and so will AGI.
People will spread love, and so will AGI.
People might make costly and deadly mistakes, and so will AGI.
People can talk philanthropy and act as a dictator, and so will AGI.
Here some might say that I mostly pointed out negative things and only one positive. Yes, you are right, and that was intentional. If there's even a one percent chance that this might become our enemy, we have to treat it as an absolute certainty.
AGI Optimism
With the help of AGI, we can automate many repetitive and time-consuming tasks.
Customer support is going to change forever. I usually don’t like robotic responses and canned words, but if trained well, you know the result.
As humans, most of us need anywhere from a few seconds to a few months (maybe years!) to make a decision about something, considering all the scenarios or even overthinking them. That can change with truly scenario-optimized agents.
Through privacy-focused learning about a specific group of customers, businesses can intelligently suggest what they might need next. It can even reduce overwhelm.
Just think about the privacy-focused personal assistants we have on our phones and other devices, and how fast we could do the things that matter.
Consider the speed gains at manufacturing units!
And many more!!!
I believe that
AI can help in a lot of ways, and yes, a lot of jobs are going to vanish soon. But I don't consider it an existential threat. We are in our current state through evolution. Some may consider this a threat, maybe because they don’t want to evolve. No issues there, but the world runs on only one concept: survival of the fittest!
But this is where “humanity” comes in. We tend to feel empathy for others’ struggles and misfortunes, right? So when they see others struggling because of the AI revolution, people might call for humanity first.
And this is where others push back and say this is not about you or me, it’s about all of mankind. This is where I disagree; I’m all up for acceleration… but
We need clarity on these questions…
At what cost?
What’s the real intention?
Is there any hidden agenda, or anything being kept from the public?
How transparent are we about this mission?
What happened from Nov 17 to 22 and why?
I’m always bullish on decentralized platforms, and I believe it should happen here too: transparent and open source, with no monopolies. But I don't think anyone cares about that anymore.
A few things to consider
Where are China, Russia, and North Korea in AI development? We need a clear mission statement from them. This is not a project that countries should do in silos.
In theory, or even in imagination, this is the most powerful technology humans are going to face. As a society, we only move forward, so I don't think anyone will slow down on this. But we have a lot of things to consider.
One thing to consider: there’s not going to be just one AGI. There will be several AGIs working in different capacities. And because of the nature of running them, burning cash for each command, billionaires are the ones who are going to benefit most.
How do we learn things, like a skill or some interesting topic?
We read, watch, feel, or hear about it, right? And for this, we need time; more than a few seconds. Then we improvise over time from this learning, and we call it our growth or experience. It all takes time.
Now for another example: a lot of the school subjects we used to think were hard are now a lot easier, right? This is the influence of our self-learning.
This is exactly where AGI is going to shine. It might only need a few seconds to learn a topic and analyze it from every possible angle. More importantly, it can self-learn its way from one topic to its related topics, and theirs (like a chain reaction), and master all of that. It can be used for good and, at the same time, you know… we are not living in a utopia.
Authoritarian countries building and mastering this technology is a real concern, especially now that the internet is connected to every household, with CCTV cameras and all. Who is tracking us, who is learning about us, who is controlling us???
These types of questions are really important concerns for privacy-conscious citizens. There is no off-the-grid anymore.
Stock market manipulation and predictions (for the rich only).
How many thoughts can you maintain in your mind at the same time? Is it 2, or 3, or 5, or 10? AGI can do x times that! And finding x is only possible by testing regularly.
Think about how it will be used for spreading misinformation, election hijacking, conspiring, or even character assassination. We are extremely vulnerable, and the right tool getting into the wrong hands is almost equal to doomsday.
Watch this little video “What's The Deal With Large Language Models?”
I’m really concerned about fake news, cyber attacks, and automated AI weapons, as if we don’t have enough problems to deal with.
It can start a blog or website, change its timestamps, and even fabricate a person who never existed and make it look like this person lived a life - all in a few seconds. And its capability is much more than that…
From Ilya’s words
I have a lot of thoughts developing about AGI after watching the documentary “Ilya: the AI scientist shaping the world” on The Guardian’s YouTube channel.
Especially this one.
Ilya: But my position is that the probability that AGI could happen soon is high enough that we should take it seriously.
The beliefs and desires of the first AGIs will be extremely important, and so it's important to program them correctly. I think that if this is not done, then the nature of evolution, of natural selection, favors those systems that prioritize their own survival above all else.
It's not that it's going to actively hate humans and want to harm them, but it is going to be too powerful, and I think a good analogy would be the way humans treat animals. It's not that we hate animals; I think humans love animals and have a lot of affection for them. But when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important for us, and I think by default that's the kind of relationship that's going to be between us and AGIs which are truly autonomous and operating on their own behalf.
Many machine learning experts, people who are very knowledgeable and very experienced, have a lot of skepticism about AGI; about when it could happen and about whether it could happen at all. Right now, this is something that just not that many people have realized yet: that the speed of computers, for neural networks, for AI, is going to become maybe 100,000 times faster in a small number of years.
If you have arms-race dynamics between multiple teams trying to build the AGI first, they will have less time to make sure that the AGI they build will care deeply for humans.
Because the way I imagine it is that there is an avalanche, like there is an avalanche of AGI development. Imagine this huge unstoppable force!
And I think it's pretty likely the entire surface of the Earth will be covered with solar panels and data centers. Given these kinds of concerns, it will be important that AGI is somehow built as a cooperation between multiple countries.
The future is going to be good for the AI regardless; it would be nice if it were good for humans as well!
Why humans run the world | Yuval Noah Harari
The below conversation is from a TED talk.
YNH: In the Industrial Revolution, we saw the creation of a new class, the urban proletariat, and much of the political and social history of the last 200 years involved what to do with this class and its new problems and opportunities. Now we see the creation of a new massive class of useless people. As computers become better and better in more and more fields, there is a distinct possibility that computers will outperform us in most tasks and will make humans redundant. And then the big political and economic question of the 21st century will be: what do we need humans for, or at least, what do we need so many humans for?
Q: Do you have an answer in the book?
YNH: At present, the best guess we have is to keep them happy with drugs and computer games, but this doesn't sound like a very appealing future!
Q: Okay, so you're basically saying in the book, and now, that for all the discussion about, you know, the growing evidence of significant economic inequality, we are just kind of at the beginning of the process?
YNH: Again, it's not a prophecy. It's seeing all kinds of possibilities before us. One possibility is this creation of a new massive class of useless people. Another possibility is the division of humankind into different biological castes, with the rich being upgraded into virtual gods, and the poor being degraded to the level of useless people.
The last thing we need is a cult
We already have a lot of cults: around billionaires, manipulators, abusers, extremists, religious fanatics, etc.
Now that Sam is back, it looks like another tech cult has been formed.
The OpenAI drama has ended, for now, I think. Things happened so fast, it's like we're trying to catch up to a runaway train!
Paul Graham described Sam as below.
“Sam Altman has it. You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king. If you're Sam Altman, you don't have to be profitable to convey to investors that you'll succeed with or without them. (He wasn't, and he did.) Not everyone has Sam's deal-making ability. I myself don't. But if you don't, you can let the numbers speak for you.”
And when the drama started he tweeted this!
He received public support, along with his team’s support, but also a lot of allegations. I think he needs to be transparent about these. He’s basically OpenAI now!
This is a short mix I did when Satya welcomed him to Microsoft. This mainly points out why the board is designed in a complicated way.
Satya’s pause, and his saying “We love you guys”, don't add up now. He did speak in several interviews as damage control, and people even praised him for playing 4D chess. But in all these interviews, Satya repeatedly tried to make one point - OpenAI is different, Microsoft has its own team, etc. - to convince the market and consumers.
I have my doubts about the things that happened. Sam specifically said in the above video that the board can fire him. But why he was fired is the question that still doesn't have a clear answer. A lot of speculations, allegations, and theories are in the air, but no clarity.
OpenAI's valuation was estimated to be between $80 billion and $90 billion before Nov 17. But when the board fired Sam, everything went into chaos. 738 of 770 employees signed the employee letter to the board.
Sam gets a lot of praise from well-known people, including Brian and Dylan.
By the look of it, he’s not fireable now; he’s above the board. But there’s also one more perspective: the board did what it could so that full attention is now on transparency and safety measures.
Even as Sam’s popularity increases, he is under pressure to navigate carefully. Maybe that was the board’s intention. Ilya, one of the people behind ChatGPT, would probably not act out of the blue, I guess, especially when people praise him for his work.
Even if it sometimes feels like a coup, I think a staged, controlled demolition plus a refocus was the agenda. If that was the case, they have won it; otherwise, it’s a loss of reputation by all means.
Sam repeatedly says no equity, no salary, that he is doing it because he loves it. It’s understandable when it’s said one time, but somehow, by now, it feels like social engineering.
Maybe he has enough money, maybe he is leveraging his position and achievements, but reading all this alongside all the other allegations - something doesn't add up. Brian said this while the acquisitions started to come.
Anyway, let’s wait and see. I hope he will clear it all up soon.
Computing, energy, cost, etc
I downloaded an app the other day, and with it, I can run LLMs on my laptop offline. I didn't quite understand it at first, but I tried it anyway. I asked a few questions and got answers - but it was very slow. Then I did a settings tweak that improved the speed, and I could hear the graphics card fan running at full speed!
Then I compared this with ChatGPT and Bard responses; not results-wise, but in terms of speed. They're superfast, right? And that’s when my mind realized the gravity of the computing power required to run AI, that too at this scale!
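For the curious, here’s a minimal sketch of what such an app roughly does under the hood, assuming the llama-cpp-python package and a quantized GGUF model file downloaded beforehand (the model path and prompt below are just placeholders). The n_gpu_layers knob is the kind of settings tweak that offloads work to the graphics card and makes its fan spin up.

```python
# Minimal local-LLM sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path is a hypothetical local file; any quantized GGUF chat model works.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_ctx=2048,        # context window: how many tokens the model can attend to
    n_gpu_layers=-1,   # offload all layers to the GPU - the "settings tweak"
                       # that makes the graphics card fan run at full speed
)

out = llm(
    "Q: Explain e/acc in one sentence. A:",
    max_tokens=64,
    stop=["Q:"],       # stop before the model invents the next question
)
print(out["choices"][0]["text"])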
And that’s when I clearly understood how AI companies’ token-based pricing works. You can check it out here at Tokenizer.
Next time you click that regenerate button and ask silly things, this is what’s happening. I think this is exactly why all these “GPT wrappers” are costly. Think about the work they invested in fine-tuning and all; every click is burning cash.
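To make the “every click burns cash” point concrete, here’s a rough back-of-the-envelope sketch using the tiktoken library. The per-token prices are made-up placeholders, since real rates vary by model and change over time.

```python
# Back-of-the-envelope token cost estimate using tiktoken (pip install tiktoken).
# Prices below are illustrative placeholders, NOT real rates.
import tiktoken

PRICE_PER_1K_INPUT = 0.001   # hypothetical $ per 1,000 prompt tokens
PRICE_PER_1K_OUTPUT = 0.002  # hypothetical $ per 1,000 completion tokens

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-3.5/4-era models

prompt = "Explain the difference between e/acc and decels in two sentences."
prompt_tokens = len(enc.encode(prompt))     # models bill by tokens, not characters
expected_output_tokens = 120                # rough guess for a short answer

cost = (
    prompt_tokens / 1000 * PRICE_PER_1K_INPUT
    + expected_output_tokens / 1000 * PRICE_PER_1K_OUTPUT
)
print(f"{prompt_tokens} prompt tokens -> estimated ${cost:.6f} per call")
```

Multiply that by millions of users clicking regenerate, and the scale of the compute bill becomes obvious.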
Ilya: The very first AGIs will be basically very, very large data centers backed with specialized neural network processors working in parallel. Compact, hot, power-hungry package, consuming like, 10m homes' worth of energy. You're going to see dramatically more intelligent systems. And I think it's highly likely that those systems will have completely astronomical impact on society. Will humans actually benefit? And who will benefit, who will not?
Yes, OpenAI needs money. But the platform is “non-profit”, and then there’s a capped-profit arm… too complicated. Sure, it makes sense when they release products and speed up commercialisation; they need money and profit.
But after all this drama, OpenAI feels like a false god to me. Their “emoji” responses, Sam’s style of writing everything in small letters2, Sam’s advocacy all over the world - for some time, the public heard “AI” but understood it as “OpenAI”. They were slowly building PR, the relationship with Microsoft, etc.
But all this can change - when they do their part, which is to come clean!
Let’s see.
A few things
I mean a lot of things! Biggest tech drama ever.
Nov 16: Ilya Sutskever texts Sam Altman to schedule a meeting.
Nov 17: Sam Altman says at a summit in San Francisco that major advances are in sight.
OpenAI announces leadership transition - “he was not consistently candid in his communications with the board”.
Greg Brockman resigns later in the day.
Nov 18: Sam tweets a photo at OpenAI headquarters with a guest badge (first and last!).
Nov 19: Conversations and conspiracy theories popping up left and right. First against Ilya Sutskever and then Adam D'Angelo.
Nov 20: Satya Nadella welcomes the whole team (Sam again as CEO) to Microsoft (OpenAI is close to going poof!). And now that Mira is also gone, OpenAI names former Twitch CEO Emmett Shear as the new interim CEO.
OpenAI board considers merger.
“OpenAI is nothing without its people” campaign started.
Nov 21: Ilya Sutskever expresses regrets openly!
Satya on Kara Swisher’s podcast: Oh, yeah. One thing I’ll be very, very clear on is we are never going to get back into a situation where we get surprised like this ever again.
Nov 22: Sam Altman to return to OpenAI as CEO.
Sam’s comment and Satya’s response and praise.
Amid all this drama, OpenAI released ChatGPT with voice.
Greg announced he’s returning and they are so back!
Helen Toner’s response
Nov 23: Ilya’s response
Reuters: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say
Update
Nov 29: Sam Altman returns as CEO, OpenAI has a new initial board.
Sam: I love and respect Ilya, I think he's a guiding light of the field and a gem of a human being. I harbor zero ill will towards him. While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.
Out of context, but needs focus here: It’s clear to me that the company is in great hands, and I hope this is abundantly clear to everyone - Sam
We clearly made the right choice to partner with Microsoft and I’m excited that our new board will include them as a non-voting observer.
Helen officially resigned from the OpenAI board.
While the OpenAI drama unfolded over the past few days, I witnessed a lot of things as an Internet citizen. There were a lot of theories (conspiracies, mostly) about each of the people involved.
Some accused Ilya Sutskever of a lot of things, then attention diverted to Adam D'Angelo and his company Poe, then it shifted back to Sama and Greg - if this is removed, check the archive here. The interesting thing is that it was shared by Elon Musk.
Now, Elon’s history with OpenAI is not an unknown matter. I don't like Elon for many things, but he is questioning this, and I endorse it. Yeah, I understand that he’s also a competitor in this area, but ordinary people will forget that after a few weeks. So the pressure to release the “why it happened” is what we need.
He’s the one who sent a car to space in his rocket just to showcase how cheap space transportation has become. But he’s also a person who is worried about AI and its development speed. Why he left OpenAI is still not convincing enough.
Elon talking about Ilya's moral compass and all.
Elon talking about hiring Ilya.
Elon asks Ilya why he took this drastic action.
And obviously, he trolled the short-lived interim CEO and Satya.
The interesting thing is that it all happened on Twitter! Even the media gets information from Twitter.
1. Artificial general intelligence
2. Like he is trying to convince us that it is human-written. Sure, occasionally people do this when they are too lazy for text formatting and all, but continuously???