C2PA’s ideal is beautiful in theory: every LLM pass logged, signed, and traceable.
Some possible issues I could foresee:
LLMs can log *nothing* useful — no prompts, no temperature, no model details — and still create a technically valid C2PA manifest. The spec allows this, so the chain is “valid” but meaningless.
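To make that concrete, here's a rough sketch of what such an empty-but-valid manifest could look like. This is illustrative JSON built by hand, not output from a real C2PA library; the `c2pa.actions` assertion and the `digitalSourceType` URI follow the spec, but the tool name and signature are placeholders:

```python
import json

# Hypothetical manifest: structurally valid per the C2PA actions
# assertion, yet it records no prompt, no temperature, no model details.
manifest = {
    "claim_generator": "SomeVendorTool/1.0",  # placeholder tool name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # Marks the output as AI-generated, and nothing more:
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
    "signature": "<COSE signature bytes>",  # placeholder
}

# The chain "validates" structurally, but tells a consumer nothing
# about which model ran or how.
serialized = json.dumps(manifest)
print("prompt" in serialized, "temperature" in serialized)
```

Everything the spec requires is present; everything a consumer would actually want to know is absent.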
There’s no way to verify that Grok actually is Grok, or that it isn’t pretending to be ChatGPT. Vendors can claim any model name in the manifest; the signature only proves who signed it, not that the claims are true, so unless you trust the signer’s credentials, you can’t know if they’re lying.
Redaction lets LLMs hide prior steps — for example, if ChatGPT wants to edit content originally created by Gemini, it can redact that Gemini was ever involved. The chain shows a gap, and the consumer has no way to know what was removed.
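A toy sketch of that redaction scenario, using a made-up chain structure (real C2PA redaction operates on hashed assertions referenced from the claim, not on plain dicts like this):

```python
# Hypothetical provenance chain before redaction: each entry names the
# tool that produced or edited the content.
chain = [
    {"step": 1, "claim_generator": "Gemini", "action": "c2pa.created"},
    {"step": 2, "claim_generator": "ChatGPT", "action": "c2pa.edited"},
]

def redact(chain, generator):
    """Replace a prior step with a redaction marker, as the spec permits.
    The consumer can see *that* something was removed, but not *what*."""
    return [
        {"step": s["step"], "redacted": True}
        if s["claim_generator"] == generator else s
        for s in chain
    ]

redacted = redact(chain, "Gemini")
print(redacted[0])  # the Gemini step is now just a gap
```

The chain is still cryptographically consistent after redaction; the history is simply gone.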
Metadata gets stripped in transit — by social media, email, cloud storage — breaking the chain.
C2PA enables transparency but doesn’t enforce it. You’ll need governance on top.
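One way that governance layer could look: a consumer-side policy that refuses to trust any manifest whose actions don't disclose the model. The required field names here (`model_name`, `model_version`) are hypothetical policy choices, not part of the C2PA spec:

```python
# Fields our hypothetical policy demands of every AI action.
REQUIRED_FIELDS = {"model_name", "model_version"}

def passes_policy(manifest: dict) -> bool:
    """True only if every action in the manifest discloses the model."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion["data"]["actions"]:
                if not REQUIRED_FIELDS <= action.keys():
                    return False
    return True

bare = {"assertions": [{"label": "c2pa.actions",
                        "data": {"actions": [{"action": "c2pa.created"}]}}]}
disclosed = {"assertions": [{"label": "c2pa.actions",
                             "data": {"actions": [{"action": "c2pa.created",
                                                   "model_name": "ExampleLLM",
                                                   "model_version": "1.0"}]}}]}
print(passes_policy(bare), passes_policy(disclosed))
```

Both manifests are equally "valid" C2PA; only the policy layer distinguishes them.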
I did a bit of searching and learned that while C2PA is used by some of the major LLM vendors, it's in specific, limited ways — mostly for AI-generated images; none of them do it for text outputs.
I love that you're thinking about Provenance Chains for Tightknit, Lief Z.!