Discussion about this post

Donald:

> This is false, and frankly very silly, and it’s always left extremely vague because the proponents of this view cannot articulate any mechanism or reason why going slower would result in more “care” and better decisions,

Delaying is a generally good idea.

If nothing else, if it pushes AI doom a few years into the future, that alone is worthwhile.

It gives more time. Time in which maybe someone will come up with a better plan. Maybe some alignment breakthrough will happen.

And of course, there are things we can do today. Just the mundane don't-be-an-idiot stuff takes time.

It takes time to set up airgapped, Faraday-caged servers. It takes time to filter the training data and remove anything that's obviously a bad idea. It takes time to ask the LLM various questions and understand its behaviour patterns. It takes time to use the crude AI analysis tools that we do have at our disposal. Is this sufficient? Who knows. But these sorts of obvious, basic precautions seem like a good idea, and they take time.
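To make the "filter the training data" step concrete, here is a minimal sketch of what a crude pre-training content filter could look like. Everything in it is assumed for illustration: documents as plain strings, a hand-written phrase blocklist standing in for real screening criteria, and the function name `filter_training_data`; it describes no actual lab's pipeline.

```python
# Minimal sketch of a crude training-data filter (illustrative only).
# Assumes: documents are plain strings; a phrase blocklist is a stand-in
# for whatever screening criteria would actually be used.
from typing import Iterable, Iterator

BLOCKLIST = (
    "synthesis route for",   # stand-in for dangerous-chemistry recipes
    "working exploit for",   # stand-in for offensive-security payloads
)

def filter_training_data(docs: Iterable[str]) -> Iterator[str]:
    """Yield only documents containing no blocklisted phrase."""
    for doc in docs:
        lowered = doc.lower()
        if not any(phrase in lowered for phrase in BLOCKLIST):
            yield doc

if __name__ == "__main__":
    corpus = [
        "A history of steam engines.",
        "Here is a working exploit for an unpatched kernel bug...",
    ]
    print(list(filter_training_data(corpus)))  # keeps only the first doc
```

Even a filter this simple takes time to design, test, and run over a web-scale corpus, which is the commenter's point.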

The quickest possible way to make AI is to just shove in as much unfiltered internet data as possible, and then dump the resulting AI straight onto the internet.

Danila Medvedev:

I am sad about how human intelligence augmentation technology was basically dropped, as were many (most?) transhumanist techs. Anders Sandberg used to write a bit about it; Eliezer, of course, focused on rationality. I presented at TransVision 2006 on "The Path of Upgrade", but then everyone pretty much ignored the potential. FHI wrote one or two reports for European bureaucracy or something, but pretty much everyone was happy to enjoy how smart they themselves already were.

Then there was the outlier of Neuralink (just the most popularized BCI), which would do basically nothing for intelligence even if it worked perfectly. Other technologies, such as smart drugs, were forgotten.

However, I still work in that direction, and in my view (after nearly 20 years) we actually have the components for human intelligence augmentation technology. The key component I had to develop myself, but there are giants (Engelbart, Machado, Horn, Mobus, Benzon, Altshuller, Schedrovitsky, Yudkowsky and many others) on whose shoulders it's rather clear that we can:

1. Radically augment individual human intelligence within a five-year timeframe. It takes more than CFAR workshops, but we have what is needed.

2. Radically augment collective human intelligence, which is equivalent to "improving institutional functionality across the board" using the same toolsets and frameworks.

3. Set up a foundation for hybrid intelligence (humans + organizations + AIs), hopefully.

4. Guide the development of AI in more human-compatible directions.

The question is — who are the live players in the rationalist/EA/AI safety/doomer/transhumanist community? I don't like the idea of doing everything myself. :(

