Military guy: suppose you have many sensors in the air, each of which can give you only low-confidence knowledge, but which together can produce very high-confidence insight, allowing the swarm to collaborate without heavy communication.
Social Platform guy: suppose you have millions of new videos arriving on the platform every day and you want to know what is going on in aggregate, both for your business strategists and for your users.
Intelligence guy: suppose you treat your monitored voice and text messages as streams and you want to be able to ‘connect the dots’ reliably in real time.
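The common thread in all three scenarios is that many independent low-confidence sources can be fused into a high-confidence conclusion. A minimal sketch of that arithmetic, assuming the sources report independent probabilities for the same hypothesis (the `fuse_log_odds` helper is hypothetical, not part of any system described here):

```python
import math

def fuse_log_odds(probs, prior=0.5):
    """Fuse independent per-source probabilities for one hypothesis
    by summing log-odds (a naive-Bayes independence assumption)."""
    logit = lambda p: math.log(p / (1 - p))
    total = logit(prior) + sum(logit(p) - logit(prior) for p in probs)
    return 1 / (1 + math.exp(-total))

# Ten sensors, each only 60% confident on its own, jointly
# yield roughly 98% confidence under the independence assumption.
fused = fuse_log_odds([0.6] * 10)
```

This is only an illustration of why aggregation pays off; real sources are correlated, which is exactly why connecting their reasoning systems, rather than just averaging their outputs, matters.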
What we cannot currently do is connect the reasoning systems of vastly many sources in real time so that adaptive feature recognition, assembly, and deduction can be performed.
How we will use this in redframer: there are roughly 50,000 to 60,000 feature films of interest to prospective redframer users. Over time, users will attach deep knowledge to these, but many elements can be identified with this system.
A simple example is building an influence network of fight/chase choreography as it has evolved.
Another is identifying certain conventions of how actors modulate action. Philip Seymour Hoffman had an amazing ability to deliver a line but delay his associated facial expression by almost a second, so that the spoken information lives in the time frame of the film while the visible information lives in the time frame of the audience. Some directors (like Charlie Kaufman) know how to amplify this cinematically.
Suppose a user focuses on this technique, assumes it has a history in film, and triggers the system to trace back and find how we came to understand and value it.
The application can be found at any of the patent search engine sites, directly from us, or via Google.