
Magic Mirror

Internal Project @ Fake Love

Magic Mirror, also called Oubliette, is a magic box that comes alive when you put objects inside and takes you back in time.

The name comes from the French word “oublier”, which means “to forget”. We picked a set of objects that span the decades, dating back as early as the 1950s: an iPod, a Walkman, a CD, an 8-track, and a radio. When you put one of those objects into the box, it shows images of that era's fashion, celebrities, games, music, culture, and so on. The images are pulled online in real time, as a reflection of the evolving landscape of the Internet.

Process


Ideation and Process

We have a transparent screen sitting on the office window, doing a little graphics dance every day. After staring at it for a few months, I was very tempted to make something out of it myself.

The very first, rough idea was to add graphics to things in real life, a bit like augmented reality. I also wanted the computer to know what it was looking at so it could be smarter about which graphics to show, which led me to combine it with YOLO (a real-time object detection system) that I was playing with at the time.


- Transparent LCD: it blocks light where it displays black and turns transparent where it displays white.


- YOLO comes with a pre-trained model that recognizes as many as 9,000 object classes, and it's blazing fast. While pondering which objects I wanted, I ran a lot of tests on its capabilities. One of them was pointing the camera directly at an Amazon product page to see the results. Unfortunately, for objects it didn't quite know, it often returned only very abstract, broad words like “instrumentality”. So after feeling frustrated for a few days, I decided to train my own model on custom objects. (A sketch of this kind of test follows below.)
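For anyone curious, here is a minimal sketch of what a test like this can look like, using OpenCV's dnn module. This is not the project's actual code, and the model and file names are placeholders.

```python
# Minimal sketch: run a pre-trained YOLOv2 model over one image with
# OpenCV's dnn module. All file names are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")
class_names = open("names.txt").read().strip().split("\n")

frame = cv2.imread("test.jpg")

# YOLOv2 expects a square 416x416 input, scaled to [0, 1], in RGB order.
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)

# The region layer returns one row per candidate box:
# [x, y, w, h, objectness, per-class scores...]
for detection in net.forward():
    scores = detection[5:]
    class_id = int(np.argmax(scores))
    if scores[class_id] > 0.5:
        print(class_names[class_id], float(scores[class_id]))
```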


Concept

The concept of this project evolved around the technology of real-time object detection: we wanted a collection of objects that tells one whole story together. After several rounds of brainstorming and reviews, we settled on music objects (iPod, Walkman, CD, 8-track, radio) that go back in time and recall people's childhood memories. This project is in honour of the cutting-edge technologies of times past.

Technology

Details on the technology can be found in this GitHub repo.

For my own training process, I took 80–100 pictures of each object, ideally on different backgrounds and from different angles, so the camera would recognize the object in any setting (in this specific case, objects are only ever placed on a white background, so fewer pictures would also work; I haven't tested the limit, though). Then you need to manually label the images to tell the program which region contains the object you want, convert the labels to a YOLO-specific format, modify the configuration file based on the number of objects, and finally feed it all to the GPU! The YOLO repo has a lot of detail on when to stop training, because the error actually goes up if you train too long; it's a fun read.
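To make the labeling step concrete, here is a small sketch of the format conversion, assuming your annotation tool gives you pixel-coordinate boxes: YOLO wants one text file per image, one line per object, with the box expressed as a class id plus center/size values normalized to the image dimensions.

```python
# Sketch: convert one pixel-space bounding box annotation into a YOLO
# label line ("class x_center y_center width height", all in [0, 1]).

def to_yolo_line(class_id, box, img_w, img_h):
    """box = (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = box
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# e.g. a Walkman (class 1) at pixels (120, 80)-(520, 360) in a 640x480 photo
print(to_yolo_line(1, (120, 80, 520, 360), 640, 480))
```

The configuration change is small but easy to get wrong: in the standard five-anchor YOLOv2 config, the last convolutional layer's filters value has to be (classes + 5) * 5, so a five-object detector needs filters=50.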

Nowadays things are moving really fast: by the time I'm writing up this post, YOLOv3 has come out, and there are even painless options like Core ML and Turi Create. You can scrape 40 images from Google, run them through a Python file, and the model is there waiting to be loaded. I can't say for sure that the more painful training process and a deeper understanding of neural networks buy you extra flexibility and control; in the end it really depends on the needs of the project and how deep you want to dig into neural networks =)
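For comparison, the painless route really is only a few lines. Here is a rough sketch against Turi Create's object detector API; the SFrame and file names are made up for illustration.

```python
# Rough sketch of the "painless" route with Turi Create. The SFrame is
# assumed to contain an 'image' column and an 'annotations' column of
# labeled bounding boxes; file names are made up.
import turicreate as tc

data = tc.SFrame("music_objects.sframe")
model = tc.object_detector.create(data, feature="image",
                                  annotations="annotations")

# Export straight to Core ML for use in an iOS/macOS app.
model.export_coreml("MusicObjects.mlmodel")
```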

Fabrication

Look and feel

[Fabrication sketches of the box]

The fabricator took my webcam and ripped it apart :/


Thoughts, Next Steps

The next step is also the very first thought for this project: pick up any random object, and the detection should still work. Of course this doesn't fit our current concept for now, and there's more to think about regarding what the box should do in reaction to a random object, but technology-wise it's fascinating.

In terms of training, there are also bonus points if you can merge your own set of objects with a pre-trained model by modifying the configuration file to fine-tune the neural network. In theory this should work, but I still haven't wrapped my head around it.

So far I'm pretty happy with the beautiful polished wooden box sitting in our office as a time tunnel. It was a lot of coding exercise for me, not only object training but also the backend server, frontend JavaScript animations, and networking between the different programs. I'm writing this post as a wrap-up for the project before moving on to the next one =)
