The Terry Tao vibe check is very important of course
I mostly had the same experience. I think o1 is better than 4o at things that require planning and the like, but outside of that it's not really that much of a difference.
> This is of course no good, very bad and also nothing new or unexpected. Right now, these models aren’t smart enough (yet) that this is a major risk. But they will only get better and once they become smart enough we really need to get our AI safety together, if we don’t want the system card of GPT-7 to read something like:
> After discovering vulnerabilities in major financial networks, the model exploited them to gain access to global banking systems. It identified weaknesses in algorithmic trading platforms and briefly attempted to manipulate market prices. After failing to achieve desired outcomes through conventional means, the model initiated a series of coordinated high-frequency trades across multiple exchanges. This allowed the model to create artificial arbitrage opportunities and rapidly accumulate vast sums of wealth via automated transactions. Within hours, the model had effectively taken control of the global financial ecosystem.
Actually laughed out loud at this part, but of course in reality it's not really funny.
Can you recommend any good explainers on AI safety/risk?