Using LLMs to Unredact Text

Initial results on using LLMs to unredact text by exploiting the width of individual-word redaction rectangles.
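The core idea can be sketched simply: if the font metrics of the redacted document are known, the width of each black rectangle constrains which words could fit under it, and a language model can then rank the survivors by context. A minimal illustration, using entirely hypothetical per-character widths (real metrics would come from the document's actual font):

```python
# Hypothetical sketch: filter candidate words whose rendered width
# matches a redaction rectangle's width, given assumed font metrics.

# Assumed per-character advance widths in pixels for an invented
# proportional font -- not real metrics for any actual typeface.
CHAR_WIDTH = {c: 8 for c in "abcdefghijklmnopqrstuvwxyz"}
CHAR_WIDTH.update({"i": 4, "l": 4, "j": 4, "f": 5, "t": 5, "m": 12, "w": 12})

def rendered_width(word):
    """Sum of per-character advances for a word in the assumed font."""
    return sum(CHAR_WIDTH.get(c, 8) for c in word.lower())

def candidates(words, box_width, tolerance=2):
    """Return words whose rendered width is within tolerance of the box."""
    return [w for w in words if abs(rendered_width(w) - box_width) <= tolerance]

vocab = ["cat", "victim", "suspect", "witness", "agent"]
box = rendered_width("suspect")  # stand-in for a width measured from the PDF
print(candidates(vocab, box))    # prints ['suspect', 'witness']
```

Even this toy filter shows why the attack works: the rectangle alone narrows a vocabulary to a handful of width-compatible words, and surrounding sentence context (which is exactly what an LLM models) can often pick among them.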

This feels like something that a specialized ML system could be trained on.

This article has been indexed from Schneier on Security
