A good friend of mine shared this Tweet with me this morning, showing off OpenAI’s new “Sora” video AI tool.

Check it out here.

This is some of the most impressive-looking AI-generated video footage I’ve seen yet, and it really shows off what’s possible.

Thus far, both image AI and video AI run into limitations, such as not understanding how physics works and sometimes showing inconsistent visuals, like objects popping into place without context.

You might say that these AIs understand HOW things look, but not WHY. That missing “why” sometimes leads to strange and amusing quirks.

As video AI gets better, how do we deal with deepfakes?

Evidently the White House is already looking into this, both in general and out of concern for upcoming elections.

The FCC has even declared deepfakes impersonating public figures like the president illegal. I’m curious if they will end up saying that creating deepfakes of anyone is illegal, or if the consequences are reserved for deepfakes in politics.

The proposed solution is to cryptographically sign official videos. The White House would hash each legitimate video and sign that hash with a private key, publishing the matching public key so anyone can check the signature. Real videos would pass that verification, and fakes would fail it, because producing a valid signature requires the private key, which theoretically only the White House holds.
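To make that concrete, here’s a minimal sketch of the sign-and-verify flow using Python’s `cryptography` library and Ed25519 signatures. The library choice, the algorithm, and the sample video bytes are my own assumptions for illustration; this isn’t any scheme the White House has actually published.

```python
# Sketch: sign a video's hash with a private key, verify with the public key.
# Assumptions: the "cryptography" package is installed, and Ed25519 is the
# signature algorithm (chosen here for illustration only).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_video(video_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Hash the video and sign the digest with the publisher's private key."""
    digest = hashlib.sha256(video_bytes).digest()
    return private_key.sign(digest)


def verify_video(
    video_bytes: bytes, signature: bytes, public_key: Ed25519PublicKey
) -> bool:
    """Anyone holding the public key can check whether the video is authentic."""
    digest = hashlib.sha256(video_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# Demo: the publisher signs; a viewer verifies; a tampered copy fails.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video = b"...raw bytes of an official video..."  # placeholder content
signature = sign_video(video, private_key)

print(verify_video(video, signature, public_key))               # True
print(verify_video(video + b"tampered", signature, public_key))  # False
```

The key property is asymmetry: verification needs only the public key, which can be distributed freely, while forging a passing signature would require the private key.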
