HackCincy held its second annual 24-hour hackathon earlier this month. I was part of one of 16 teams competing for $10,000 worth of prizes: a $5,000 grand prize, a $1,000 honorable mention, and several $1,000 track prizes. The tracks were Cryptocurrency, Non-practical, Feed the Human Spirit, and Cincinnati. Callibrity sponsored and judged the Cincinnati track - most innovative solution that benefits the Cincinnati region - and the winners built an app called The Pantry, which connected donors with local shelters in need of supplies by incorporating Kroger’s ClickList.
“More people stayed overnight than participated last year,” to quote one of the HackCincy organizers. HackCincy looks very promising, and I hope to see this hackathon continue for years to come. In the following post, I’ll be sharing my first hackathon experience: the good, the bad, and the ugly.
My team comprised E.J., Jimmy, and myself, all current Callibrity employees, plus Marshall, a past Callibrity employee who had competed on a team at HackCincy the previous year. We were notified of the opportunity, and enough participants stepped up for us to create our own team of four. Although we all worked for the same company, the four of us had seldom, if ever, worked together on the same project. There was a great advantage to this lack of shared work experience: we all brought different backgrounds and mindsets, and we were more accepting of each other’s ideas.
A few days before the hackathon, our team grouped up during lunch and came up with multiple ideas for each track. I recommend doing this so your first few hours of coding aren’t spent coming up with an idea. Our imaginations were flowing and we had some very creative ideas; however, the 24-hour time constraint put a damper on some of them. We decided our best route would be to start with a project small in scope and, if time permitted, build upon it. The result of our meeting was that we would participate in the grocery and food track, called Feed the Human Spirit, with a project idea pertaining specifically to Kroger, the track’s sponsor. Many Kroger shoppers follow specialized diets and are regularly searching for new specialty foods that fit their dietary needs. We would make an Android application that used the phone’s camera and augmented reality to highlight items on the shelf deemed ‘OK’ based on the shopper’s self-selected dietary preferences. If the shopper selected ‘peanuts’ as an allergy in the app, for example, they could hold up their camera in an aisle and see every item that did not contain peanuts circled, making safe foods easy to find without reading every single label.
This year’s HackCincy was at Union Hall in Over-the-Rhine, Cincinnati, Ohio. The building had rustic architecture inside with modern updates. I had imagined us in a boring office building on a single floor with fierce competition, but once I walked in the doors and saw the venue, I instantly understood that this event was something we were supposed to have fun with and enjoy. They had plenty of snacks and energy drinks, and they provided us with many meals that I will talk about later.
Saturday 11:00 am - And they’re off!
The clock hit 11:00 a.m. and we were off. Within the first 15 minutes, we were already 750 lines of code deep. I’m clearly joking; for a good bit of time we worked on installing the required software development tools and discussing how we would break up the work. There was some debate over what technology we’d use for the augmented reality. Marshall had plenty of professional Unity development experience; however, E.J.’s Linux laptop wouldn’t work for that. We settled on Google’s ARCore API since it would work seamlessly with our Android application and had a fair amount of documentation. To really make our project robust, we wanted a backend that housed the data the Android app would utilize. For this backend, we decided on an Express.js server backed by a Mongo database.
Jimmy and I worked on the backend while E.J. and Marshall tackled the augmented reality portion. I’m going to cut to the chase and mention that Jimmy and I hit a major roadblock within the first hour of programming. We were using a boilerplate Express.js-with-Mongo starter project I found on GitHub, but nothing was working. After two hours of tedious debugging and trial and error, I looked back at the GitHub project and realized it was five years old. Ouch. I found another starter project, copied it down, and within about 30 minutes we had a connection from Express.js to Mongo. For the next hour or two, Jimmy and I added all the endpoints the Android app would require. We even started to input some mock data into the Mongo database. Things were starting to go extremely smoothly.
I can’t recollect exactly what was going on between E.J. and Marshall, but they appeared to be making steady progress and were showing us updates along the way. The first update I saw was a computerized box rendered within the camera’s view of the real world. I was extremely impressed with the quality and knew they were going to do a great job. I never had my doubts, though, as I’ve personally worked with E.J. a little and know he’s a top-notch developer.
Jimmy and I were pretty satisfied with our backend and had it running so Marshall could start querying it from the Android app. Our mock data consisted of x, y, and z coordinates for each object in an imaginary 3D plane. The Android app could then use those x, y, z coordinates to know exactly where to draw the objects - I’ll go into a little more detail later.
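To give a feel for what the backend was doing, here is a minimal sketch of that mock data and the shelf lookup behind one of our endpoints. The field names and values are illustrative guesses from memory, not our exact schema:

```javascript
// Illustrative in-memory stand-in for our Mongo collection. Each item
// lives on a shelf and carries coordinates in meters, relative to the
// shelf's scanned anchor point, plus its dietary attributes.
const items = [
  { name: 'White Rice', shelfId: 1, x: 0.5, y: 0.5, z: 1, attributes: ['peanut-free'] },
  { name: 'Peanut Butter', shelfId: 1, x: -0.3, y: 0.2, z: 1, attributes: [] },
  { name: 'Coke', shelfId: 2, x: 0.1, y: 0.4, z: 1, attributes: ['peanut-free'] },
];

// The lookup behind an endpoint like GET /shelves/:id/items. In the
// real server this was a Mongo query inside an Express route; the
// logic itself is just a filter on the shelf id.
function itemsOnShelf(allItems, shelfId) {
  return allItems.filter((item) => item.shelfId === shelfId);
}
```

In the actual server, Mongo did this filtering for us; the sketch just shows the shape of the data the Android app consumed.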
Saturday 6:00 pm - Learning Android Development From Scratch
Jimmy and I attempted to move on to the Android app. Jimmy had very little Android experience, and mine could be classified as novice at best. I was struggling right from the beginning, so Jimmy casually moved on to build an Angular website as a management UI for the Mongo database. The practical use of this management system would be for regular Kroger employees to set where items were on shelves using a two-dimensional grid, and to specify any attributes for each item, such as peanut-free. For example, say the middle shelf, slightly to the left, holds white rice, which is also peanut-free. In the management system, you would highlight the cell that best estimates where the white rice sits on the shelf, add its peanut-free attribute, and save.
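Conceptually, each highlighted grid cell just became a record: shelf, name, attributes, and an approximate position. A rough sketch of what saving a cell might produce, where the cell dimensions are assumptions rather than anything we measured:

```javascript
// Hypothetical translation from a highlighted grid cell to the record
// the management UI would save. Column 0 is the far left of the shelf,
// row 0 the bottom; the physical cell sizes are assumed values.
const CELL_WIDTH_M = 0.25;
const CELL_HEIGHT_M = 0.3;

function cellToRecord(shelfId, name, row, col, attributes) {
  return {
    shelfId,
    name,
    attributes,
    // Approximate coordinates relative to the shelf's anchor image.
    x: col * CELL_WIDTH_M,
    y: row * CELL_HEIGHT_M,
    z: 1, // assumed fixed distance from the anchor to the shelf face
  };
}
```

The appeal of the grid is that a store employee never has to think in meters; the UI turns a click into coordinates.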
In the meantime, I continued working on the Android app, adding item attribute preferences that you could save. The use case: if you were allergic to peanuts, you could save a preference so that only peanut-free items were highlighted on the shelf. Sounds simple, right? Wrong. We are talking about Android. I used to enjoy programming my own little Android projects, for which I had an unlimited amount of time and patience, but add a time constraint and sleep deprivation and my opinion of Android has significantly diminished. The final result for saving your preferences worked well functionally, but it was incredibly ugly. Add the frustration of this preferences piece to trying to create a navigation bar for the Android app, and that’s just about where the next 10 hours of my programming time went. Brutal.
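The saving was the painful part; the preference logic itself boils down to a filter: only highlight items whose attributes include everything the shopper selected. A sketch with made-up data, outside of Android:

```javascript
// Return only the items safe to highlight: an item qualifies when its
// attributes include every attribute the shopper saved in preferences.
function itemsToHighlight(shelfItems, preferences) {
  return shelfItems.filter((item) =>
    preferences.every((pref) => item.attributes.includes(pref))
  );
}
```

With no preferences saved, everything passes; each saved preference narrows the highlighted set further.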
Sunday Midnight - Adding AR Image Recognition and Search Functionality
E.J. and Marshall’s next step was extending the use case for Kroger: we wanted our system to work with multiple shelves. The phone needs to know which shelf you are standing in front of so it can query the Mongo database for all the items and their attributes on that shelf. Instead of using a boring QR code, they found a way to take a picture of any image so that the phone could later recognize that exact image and return an associated index, whether it was a random image or branded advertising. We chose a simple image for our demo, with the possibility of turning the image into a new advertising revenue stream for Kroger. For example, a picture of the Eiffel Tower could be tied to shelf index 1, and a picture of a boat could be tied to shelf 2. Using the phone’s camera, you could scan an image, the app would recognize you scanned the picture of a boat, and it would query our Mongo database for all items and their attributes on shelf 2. Keep in mind these items have x, y, z coordinates tied to them, and where you scan the image with your phone is your point of origin. If you scanned the image on the ground, where you’re holding your phone is (0, 0, 0). If an item on shelf 2 has coordinates of (0.5, 0.5, 1), the item is 1 meter out in front of you, 0.5 meter to the left, and 0.5 meter up.
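That coordinate convention is easy to get backwards, so here is the worked example above as code. The axis assignment - x to the left, y up, z straight ahead - is my reading of the convention described here, not lifted from our actual code:

```javascript
// The scan point is the origin (0, 0, 0). Given an item's stored
// offset from the scanned anchor image, describe where it sits
// relative to the shopper (x = left, y = up, z = forward; meters).
function describeOffset({ x, y, z }) {
  return `${z} m in front, ${x} m to the left, ${y} m up`;
}
```

So the item at (0.5, 0.5, 1) reads back as 1 m in front, 0.5 m to the left, 0.5 m up, matching the example in the text.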
To summarize what the project does to this point: open the app and set any of your preferences, such as peanut-free. Walk up to a shelf and scan the image on the ground, then point your phone at the shelf. All peanut-free items are highlighted on your phone through the camera: a circle is drawn around each item, and no matter how you walked around or what angle you held your phone at, because we knew the coordinates of the item, that circle would always be drawn around it.
E.J. and I spent the next good bit of time on two things: he fixed any of the Android stuff I had ‘hacked’ to completion, and I started on “search” functionality. This use case goes beyond what our application was currently capable of, but the idea is that you could search ‘molasses’ and, because we know the coordinates of molasses, the app could guide you from anywhere in the store to exactly where it sits. We got it working at a small scale: you could search ‘Coke’, and during our demo we had a can of Coke on our shelf and the application highlighted it.
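Underneath, search was just another lookup over the same item data: match the query against item names, then hand each match’s coordinates to the AR layer to draw the highlight. Roughly, with illustrative data again:

```javascript
// Case-insensitive name search over the store's items; the AR layer
// would then draw its circle at each match's stored coordinates.
function searchItems(allItems, query) {
  const q = query.toLowerCase();
  return allItems.filter((item) => item.name.toLowerCase().includes(q));
}
```

The full-store guidance use case would extend this with shelf indices, but the demo only needed to find the Coke on the one shelf in front of us.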
Sunday 9:00 am - So. Much. Food.
We finished coding around 9 a.m., with about two hours to spare. We had all pulled an all-nighter and coded pretty much non-stop until then to get our project working. We walked around outside a few times just to take a break, and HackCincy provided us with so much food. Full meals came every few hours: gourmet barbecue chicken mac and cheese and buffalo chicken mac and cheese, Panera bagels and coffee, sushi, gourmet deli sandwiches, pizza, and I’m sure I’m forgetting more! We were all surprised how we just kept eating the whole time, but it probably helped fuel our energy.
Sunday 11:00 am - Final Presentations
The clock struck 11:00 a.m., 24 hours later, and it was time to stop and present our projects. Each of the 16 teams was limited to a 3-minute presentation with a 3-minute Q&A session. We had a ton to present within those 3 minutes: the management system for our backend, setting preferences in the app, searching in the app, and items being highlighted in real time with the camera. We unfortunately had to rush through all of it, which didn’t do justice to how robust our application was. Most of our app lived in the camera view, so there wasn’t much UI to show, which I believe hurt us compared to some of the other teams with really impressive UIs. I’m a little disheartened given how much functionality our app had and how much work went into it, but it was a learning experience. We tailored our app beyond “grocery and food” and tried to develop something Kroger could actually use in their stores.
Not winning a prize hurts only because of my competitive nature; there was so much more to gain from this experience. In a sleep-deprived environment, we all worked extremely well together with zero interpersonal issues. I learned new things about Android, learned not to use 5-year-old GitHub projects, and got to see Jimmy in his prime, ripping through that Angular management system. E.J. schooled me in Android, and Marshall was a major driver in using ARCore effectively. I had an unbelievable 24 hours of binge eating and caught a cold a few days later. All in all, it was a great time, and I’m well prepared for next year.
To see my team demo the AR experience in our final presentation, after 24 hours of straight coding with no sleep: