WTA-76: Go Ahead, Play With Your Food!
Date: Saturday, October 8, 2016
Time: 10:00 a.m. – 12:00 p.m. Pacific (1:00 p.m. – 3:00 p.m. Eastern)
Facilitator: Michael Larsen
Attendees: Michael Larsen, Anna Royzman
LoseIt is an app that helps people lose weight by tracking the foods they eat, their exercise, and other factors. Recently, the team added a new feature to the mobile app called SnapIt!, which lets people take pictures of their food to help identify what it is and how many calories it may have. It’s an interesting idea, and it definitely puts a new spin on “machine learning”, but how does it work? How well does it work? I thought it might be fun to find out, and in the process have a discussion about testing machine learning and some of the unique challenges we might face along the way.
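At a high level, a feature like SnapIt is a pipeline: photo goes in, a classifier makes guesses, and the guesses are matched against a calorie database. LoseIt's actual model and data are not public, so the sketch below is purely illustrative: the classifier is a stand-in that returns canned guesses, and the calorie figures are toy values, not real app data.

```python
# Hypothetical sketch of a SnapIt-style pipeline. Everything here is a
# stand-in: the real app's model, labels, and calorie database are not public.

# Toy "calorie database" keyed by food label (values are illustrative only).
CALORIES_PER_SERVING = {
    "apple": 95,
    "banana": 105,
    "pizza slice": 285,
}

def classify_photo(photo_pixels):
    """Stand-in for the image classifier: returns (label, confidence) guesses.

    A real implementation would run a trained image-recognition model over
    the photo; here we simply pretend the photo looks like an apple.
    """
    return [("apple", 0.82), ("banana", 0.11)]

def suggest_foods(photo_pixels, min_confidence=0.5):
    """Turn classifier guesses into calorie suggestions shown to the user."""
    suggestions = []
    for label, confidence in classify_photo(photo_pixels):
        if confidence >= min_confidence and label in CALORIES_PER_SERVING:
            suggestions.append((label, CALORIES_PER_SERVING[label]))
    return suggestions

print(suggest_foods(photo_pixels=None))  # [('apple', 95)]
```

The confidence threshold is one place where testing gets interesting: set it too low and the app offers questionable guesses (as we saw in the session), set it too high and it offers nothing at all.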
Needed for the session: Download LoseIt on your mobile phone or device (SnapIt may be limited to iPhone at this point).
Additional Thoughts for Your Consideration:
See Carina C. Zona’s Keynote “Consequences of an Insightful Algorithm”
YouTube Video of Talk: (https://www.youtube.com/watch?v=NheE6udjfGI)
Slides From Talk: (http://www.slideshare.net/cczona/consequences-of-an-insightful-algorithm)
Well, it looks like a hurricane hitting the East Coast took out a few of our stalwart players, but thank you to Anna Royzman for coming out and staying involved and engaged the whole session. We had some fun looking at the ways the SnapIt feature would guess at what kind of items it was photographing. Sometimes the suggestions were right, and sometimes they were definitely questionable. We also discovered a bug along the way: if you tried to retake the same picture more than twice, only half of the photo snap button would render. Fun!
More than just focusing on the app, though, we had a spirited discussion about machine learning in general. Though we tend to think of algorithms as somewhat more fair than people, algorithms are in fact only as fair as the humans who create, use, and interpret them. Which is to say, often not very fair.
The full chat transcript can be seen here.