In mid-October 2021 Facebook announced on its A.I. blog that it was participating in a project called Ego4D – an international consortium of 13 universities, in partnership with Facebook A.I., collaborating to advance egocentric perception.
The Facebook blog – ‘Teaching A.I. to perceive the world through your eyes‘ – explains that ‘egocentric perception’ research was established around five benchmark challenges centred on first-person visual experience:
- Episodic memory: What happened when? (e.g., “Where did I leave my keys?”),
- Forecasting: What am I likely to do next? (e.g., “Wait, you’ve already added salt to this recipe.”),
- Hand and object manipulation: What am I doing? (e.g., “Teach me how to play the drums.”),
- Audio-visual diarization: Who said what when? (e.g., “What was the main topic during class?”),
- Social interaction: Who is interacting with whom? (e.g., “Help me better hear the person talking to me at this noisy restaurant.”)
The Facebook A.I. blog explains that the intended outcome of this long-term project is to:
‘catalyze research on the building blocks necessary to develop smarter A.I. assistants that can understand and interact not just in the real world but also in the metaverse, where physical reality, A.R., and V.R. all come together in a single space.’
The scientific consortium, and Facebook in particular, want to ‘solve research challenges around egocentric perception’.
Or in other words, create a transhuman interface whereby an Artificial Intelligence presumably under Facebook’s watchful all-seeing eye will come to experience what it is to be human intimately; very intimately!
Nothing to worry about here.
It all sounds very altruistic and innovative; Facebook exclaims that it has already collected over 2,200 hours of first-person video ‘in the wild’.
So far, so good; what could possibly go wrong?
Well, there have been a few science fiction novels, movies, and T.V. shows that have pointed out a few dilemmas – like total digital surveillance and behavioural manipulation – but what would science fiction writers know, anyway!
This graphic from the Ego4D website shows they are seeking first-person video of an extensive range of human activities, posing the question: What coverage of scenarios do you have? The site’s caption reads: ‘A sample visualisation of our scenarios is below. The outer circle shows the 14 most common scenarios (70% of the data); the wordle shows scenarios in the remaining 30%.’
Again, this is all accomplished under the catch-all banner of ‘for the betterment of humankind’.
We are teaching A.I. to understand human likes, dislikes, and just about every aspect of human existence, from sexual, political, moral, ethical, and religious views to shopping preferences. And only 20 or so years ago, this was still the stuff of S.F. and conspiracy theory.
Is this all in the name of progress or something else?
Well, considering Facebook’s history of abusive relationships with its users – not to mention the controversy over the use and distribution of personal data – who will watch the world’s largest social media network?
The Facebook A.I. blog doesn’t really discuss how this data will be used, who will profit, and whether the ‘video data’ suppliers will receive any payment.
So, what safety mechanisms are in place to stave off the inevitable scenario described in George Orwell’s Nineteen Eighty-Four: Big Brother always watching, always able to interpret, comprehend, and interact with the humble human? When will the real motivation behind Facebook’s A.I. project be uploaded?
ajjames:mysci-fi Aug 2022