This is the first post from the new company website and, quite frankly, it was long overdue. I finally found the time to write about our new app, Your Memories – I was there. It came to be while I was working on ‘Feature 21’ for My Day To-Do, i.e. it is a by-product of our work on that feature. Feature 21 uses image processing and Artificial Intelligence (AI), and during development I had a thought: “hmm, let’s take some of the code, specialise it and spin it off into a single-feature app”. In this post I will talk about what the app is, the problem it solves and some of the use cases it can be applied to.
Background to single feature apps
Every time I evaluate a new feature to add to My Day To-Do, I am usually exploring a new iOS API. My typical process is to create a new Xcode project and try the new API out in a dummy app. I usually end up taking that dummy app a bit too far, i.e. very close to a complete app, and I have done that one too many times in the last few years. For almost all of last year I had been thinking: why not just polish one of those near-complete dummy projects and release it on the App Store?
Great, with that background set: these last few months I have been playing around with some machine learning and image processing APIs on iOS devices. This time I was determined to spin the work off into a single-feature app, and so I did. Nearly a year after first having the thought, I finally released a single-feature app to the App Store: Your Memories – I was there.
Your one-stop photo solution
The premise of I was there is really simple: you take a picture and the app tries to identify what’s in it, i.e. it has a brain that it uses to recognise the contents of your image. The idea is that as you go about your day, you capture anything you see that’s memorable and make a memory. You can then share it via a message or on social media, and when you do, the app adds a watermark to the image with the relevant info. What it does differently from the other 10,000+ photo apps out there is that it gives you a head start on categorising or naming your memory by predicting what’s in the image.
The problem it solves
One of the problems this app solves is letting someone know what you did the evening or weekend before. Say you want to tell someone about your trip to the zoo via a series of images you took there: you would send them a message with the images and write a description of what’s in them, i.e. a two-step process. With I was there, it’s a one-step process, because the pictures speak for themselves. Literally: they carry details of what’s in the image, the location, and the exact date and time. Here’s an example,
On 3rd August at 8:02pm, I was at Great Western Highway, Sydney (Australia), and I had shezwan dosa. If someone asks me what I had for dinner on my birthday evening, all I need to do is send them this picture; it gives them the necessary information about my birthday dinner. Obviously with close friends and family you would want to share more details, but for social media this much info is really more than enough (if you think this is too much info, then comment and let me know what you think could be taken out).
Now, obviously the app’s AI didn’t correctly identify the image as shezwan dosa; it thought it was a burrito, so I changed it to its right name. Shezwan is an Indian preparation that I think originated in the western part of India, and this image shows a dish that’s a combination of a south Indian dish (dosa) and what Indians know as Chinese food, i.e. shezwan. As it is, I was there struggles to identify a lot of things, so I can’t realistically expect it to correctly identify specialised Indian cuisine.
p.s. I reckon there’s nothing Chinese about the Shezwan dishes in India; they are just another style of Indian cooking, and it’s all hot (chilli) and spicy food.
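For the curious, a caption like the one on the photo above could be assembled along these lines. This is only a minimal sketch in Swift; the type and function names (Memory, watermarkCaption) are hypothetical and not the app’s actual API.

```swift
import Foundation

// Hypothetical model of a "memory": the classifier's (user-editable)
// prediction, a reverse-geocoded place string, and the capture time.
struct Memory {
    let label: String   // what the image classifier predicted, e.g. "shezwan dosa"
    let place: String   // e.g. "Great Western Highway, Sydney (Australia)"
    let date: Date      // when the picture was taken
}

// Compose the watermark text in the "On <date> at <time>, I was at
// <place> and I had <label>." format shown above.
func watermarkCaption(for memory: Memory) -> String {
    let formatter = DateFormatter()
    formatter.locale = Locale(identifier: "en_AU")
    formatter.dateFormat = "d MMMM 'at' h:mma"
    let when = formatter.string(from: memory.date)
    return "On \(when), I was at \(memory.place) and I had \(memory.label)."
}
```

Since the label is just a string, correcting a wrong prediction (burrito → shezwan dosa) is simply a matter of replacing it before the caption is drawn onto the image.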
Our target demographic for this app is mostly people who use Instagram a lot or, in terms of age groups, people aged 16–30. With this app they don’t have to type nearly as much; they can just share their pictures on their favourite social media. That’s the concept. Now let’s examine a few real-world use cases for this app,
Case 1: Barry and his friends
Barry is out having a drink with his girlfriend Iris and two other friends at O’Niels pub at 340 George St, and at some point his friends Clarke and Bruce message him asking what he’s up to and where he is. Now, instead of typing ‘having a drink at O’Niels on George St’, Barry can just take a picture of his beer glass with I was there and send that to Bruce and Clarke, and they will know where he is and what he’s doing.
Case 2: Evan’s day trip with his son
Evan’s parents live overseas and as such they don’t see their grandson as much as they would like to. Therefore Evan makes a point of sending them as many pictures of his son as he can so his parents feel a little closer to their grandson.
Last weekend, he took his son on a day trip to different places across Sydney, starting from Palm Beach in the north and ending at Cronulla Beach in the south. He could easily take 40 or more pictures on such a trip, and if he just sent them to his parents, he would have to explain where and when each one was taken and what they were doing, an incredibly tedious process. However, if he uses I was there on his iPhone to take and share the pictures, they carry all that info, i.e. they are self-explanatory, and his parents get the details without asking him to explain every single picture.
Case 3: My trip to Oz Comic Con
Comic Con can be fun; being a long-time fan of sci-fi TV shows and comics, there’s just so much for me to see and do. This too would be a day trip, full of cosplayers, comics to see and guests to go watch. Comic Con has its itinerary for each day printed out and handed to you, but what it doesn’t have is details about all the miscellaneous activities happening across the floor, i.e. cosplay etc. The last time I went, I took about 70 pictures, not just of the guests but also of some awesome cosplayers. I mean, I saw two blokes dressed as Jay and Silent Bob and, OMG, I had to take a picture. After Comic Con, I would share all the pictures with my best friend, who is overseas when it happens, so when I send him pictures he wants to know the timeline of events: what time did I see the stormtrooper cosplay, where did I see William Shatner talk, how many comic books were at the stall when I got there, etc. Once again, if I use I was there, the images are fairly self-explanatory, and it saves me the time of explaining every single picture to my best friend.
Initially I wanted to write everything in just one post, but as I started writing I realised there was more content than I initially thought. Hence, going by a separation-of-concerns approach, I figured I would find a logical stopping point and split this into two posts.
In this post we talked a little bit about our understanding of single-feature apps, our motivation to create one, an overview of what we made (I was there), our target audience and some of the use cases where it would work best. In Part 2,
- we will go into some technical details of the computer vision and AI components of this app
- some of the iOS problems we hit and how CocoaPods saved the day
- and lastly, the marketing effort that’s gone into this app so far e.g. was the Fiverr gig worth it?
Anyway, that wraps up this post. Until next time, remember I am working on my startup full-time now, so if you find any of my posts useful and want to support me, buy or even try one of our products and leave us a review on the App Store.