For our midterm project, my partner Mengzhen and I want to explore machine-learning image recognition in AR.
So far in class we have explored using image or object targets to add augmentables into the space. While that is a great way to trigger changes, we want to build an experience that reacts to objects without the constraint of selecting and adding a target first.
Mengzhen has experimented with an example AR experience that recognizes a drawing of a clock and shows the user a working digital clock.
We see the direct translation from real life to a digital representation as an area rich for exploration, especially if the application can recognize and react to objects in the physical environment.
Once an object is recognized, there are multiple ways an AR experience can react to it:
Create a new augmentable in the scene
Add special effects to the physical object
Use the object abstractly: pass its name to a third-party API or fetch related data
Interaction between the augmentable and a real person
- voice recognition / voice command
- facial expression
Reaction to changes in the physical environment
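The reaction modes above boil down to a simple dispatch: a classifier produces a label for what it sees, and the app maps that label to a reaction. Here is a minimal Python sketch of that idea, assuming a hypothetical `classify_frame` function and stand-in reaction handlers rather than any real AR framework or ML model:

```python
def classify_frame(frame):
    """Stand-in for an ML image classifier; returns a label for the frame.

    A real app would run a model (e.g., a mobile CNN) on the camera frame.
    Here we just read a label out of a dict to keep the sketch runnable.
    """
    return frame.get("label", "unknown")


def spawn_augmentable(label):
    # Reaction 1: create a new augmentable in the scene.
    return f"spawned 3D model for '{label}'"


def fetch_related_data(label):
    # Reaction 3: use the object abstractly, e.g. pass its name
    # to a third-party API and fetch related data (mocked here).
    return f"fetched metadata for '{label}'"


# Map recognized labels to reactions (hypothetical examples).
REACTIONS = {
    "clock": spawn_augmentable,   # e.g., show a working digital clock
    "book": fetch_related_data,   # e.g., look up the title online
}


def react(frame):
    """Classify the frame, then dispatch to the matching reaction."""
    label = classify_frame(frame)
    handler = REACTIONS.get(label)
    return handler(label) if handler else f"no reaction for '{label}'"


print(react({"label": "clock"}))  # → spawned 3D model for 'clock'
print(react({"label": "shoe"}))   # → no reaction for 'shoe'
```

In a real build, the dispatch table would likely be the stable part, while the classifier and handlers get swapped for the actual model and AR scene calls.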