TigerTalks on the Road returns to California

Monday, Apr 30, 2018
by Wright Seneres

As part of Princeton University’s efforts to bolster its presence on the Pacific coast, the Princeton Entrepreneurship Council was excited to bring the popular TigerTalks panel discussion series back to Silicon Valley in April.

PEC, along with the Princeton Club of Northern California and Princeton Alumni Angels, presented “TigerTalks on the Road: AI and Computer Vision” at the DLA Piper offices in Palo Alto. The panel featured Thomas Funkhouser, the David M. Siegel ’83 Professor in Computer Science and head of the Princeton 3D Vision Lab; Bryton Shang ’12, founder of Aquabyte, a startup applying machine learning to fish farming (and a Princeton Alumni Entrepreneurs Fund portfolio company); Ajit Singh, former lecturer at Princeton and Managing Director at Artiman Ventures, which includes AI-focused companies in its white-space investment portfolio; and Jeff Rosedale ’86, intellectual property attorney and Partner at Baker & Hostetler LLP.

Funkhouser on the basics of computer vision:

It’s a very, very hot area right now, and very dynamic. Most of computer vision didn’t work at all about ten years ago. But starting around 2012, things really changed. Datasets became available that were labeled by people with what was in each image. One dataset in particular, called ImageNet, used “mechanical turkers” – people around the world on the internet – to label millions of images with their content: “that’s a cat,” “that’s a kitchen,” “that’s something else.” That provided enough data for algorithms to automatically learn the key features that differentiate one kind of object from another in an image. And so algorithms based on deep learning really started to work.

The really cool thing about that is, this dataset ImageNet that changed everything…happened at Princeton! Jia Deng, the Ph.D. student who put that all together, did it at Princeton. Very transformative.
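To make that recipe concrete, here is a minimal sketch (not from the talk) of the supervised pipeline Funkhouser describes: human-provided labels plus a standard deep network. Random tensors stand in for real ImageNet images and their crowd-sourced labels.

```python
# A minimal sketch of ImageNet-style supervised training; random tensors
# stand in for real labeled photos.
import torch
import torch.nn as nn
from torchvision import models

images = torch.randn(8, 3, 224, 224)       # batch of 8 RGB "photos"
labels = torch.randint(0, 1000, (8,))      # "that's a cat", "that's a kitchen", ...

model = models.resnet18(num_classes=1000)  # a standard deep-learning classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One gradient step: the network adjusts its internal features so its
# predictions move toward the human-provided labels.
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```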

PHOTO: Prof. Tom Funkhouser speaks at TigerTalks on the Road

Photo by StillNation Photography.

Funkhouser on the work he does at Princeton in the 3D Vision Lab:

The computer vision I work in involves devices that have a camera, operating in the real world and trying to understand the world around them. So a self-driving car would have cameras that need to understand where the pedestrians are, and the road, and everything else. They’re operating in the real world and having to make real-time decisions about: what is the three-dimensional structure of the world around me, what are the people, what are they doing, what are the objects, what can I manipulate.

We try to understand what all the objects are that are visible within the camera. These pixels are a chair, these pixels are a door, so on and so forth. But we would like a much deeper understanding of that. It’s one thing to just label the pixels but it’s a completely different thing to be able to understand how to interact in this scene, like navigate through it.

We use computer graphics models to generate massive training sets. I can make millions of perfectly-labeled images, and then I can train my deep learning network on those plus the small amount of real data that we have. And then end up with these networks that really understand how to label the pixels according to what object they are, but also how to predict what’s behind the visible surfaces. So from my 2-D view right here, I can’t see most of you; your legs are not visible to me if you’re not in the front row. But if I’ve trained on enough data from computer graphics scenes like this, I should be able to learn a prior that allows me to predict that you all have legs, and where they are.
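A minimal sketch of that mixing strategy, with stand-in datasets: a real pipeline would render labeled scenes with a graphics engine rather than generating random tensors.

```python
# Combine a huge synthetic set with a small real set into one training stream.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def fake_dataset(n):
    # (image, per-pixel label map) pairs; a rendered scene gives ground-truth
    # labels for every pixel for free.
    return TensorDataset(torch.randn(n, 3, 64, 64),
                         torch.randint(0, 20, (n, 64, 64)))

synthetic = fake_dataset(1_000)  # cheap, perfectly labeled, effectively unlimited
real      = fake_dataset(100)    # expensive, hand-labeled, scarce
loader = DataLoader(ConcatDataset([synthetic, real]), batch_size=32, shuffle=True)

for images, pixel_labels in loader:
    # Here a segmentation network would be trained to predict a class for
    # every pixel, including guesses about occluded geometry.
    break
```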

The 3D Vision Lab and the Amazon Robotics Challenge:

One more cool project in the lab: there is something called the Amazon Robotics Challenge. Amazon has warehouses with buckets that are full of stuff. It’s cluttered; there are like 30 or 40 objects within a bucket, all on top of each other. They have people who go and pick the right objects out and put them in boxes for shipping. So they would like to have robot arms do that. They had a challenge last year, and many top research groups entered. The goal is to be able to successfully pick up objects and put them in the right box. It’s really, really difficult because the objects are all jumbled up and many of them you’ve never seen before. So we talk about having lots of training data – if you get to look at a billion examples of this kind of object, then you can get pretty good at recognizing it. But the Amazon warehouse catalog has like two billion objects in it, so you can’t train on all of them. We got to train on a hundred of them, and at test time we had to be able to handle any object you put in the bin. The short version is: we won the challenge!
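One common way to handle never-before-seen objects – sketched here as an illustration, not necessarily the team’s exact method – is to learn an image embedding on the ~100 training objects, then recognize a novel product at test time by matching the camera view against one catalog photo per product.

```python
# Illustrative novel-object recognition via embedding + nearest neighbor.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=None)  # would be trained on the ~100 objects
backbone.fc = torch.nn.Identity()         # use the network as a feature extractor
backbone.eval()

def embed(image_batch):
    # Unit-length descriptors, so cosine similarity is a plain dot product.
    return F.normalize(backbone(image_batch), dim=1)

catalog  = torch.randn(5, 3, 224, 224)    # one reference photo per novel product
observed = torch.randn(1, 3, 224, 224)    # camera view of the object in the bin

with torch.no_grad():
    sims = embed(observed) @ embed(catalog).T  # similarity to each catalog photo
    print("best catalog match:", sims.argmax().item())
```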

Shang on Aquabyte:

Fish farming is one of those industries that’s not really prominent in the U.S. but is massive worldwide. It’s a $180 billion industry and the fastest-growing sector of food production. It covers everything from shrimp farming in Thailand to salmon farming in Norway. In the Norwegian fjords they’re growing salmon in these massive pens: 200-foot-diameter pens growing 200,000 salmon at a time. The main way they monitor feeding – and it’s the number one cost – is by sticking surveillance cameras in the pen; people watch the footage of the fish swimming around all day and the pellets falling down, and determine how much to feed based on that. The big idea being: if we could design a machine learning model to optimally feed, and if we could save 20-30 percent or even a few percentage points of feed, that could be a great deal. The way we thought about doing that was, if we could understand the size of the fish over time, the environmental conditions, and how much was fed, then we could start to figure out how much to optimally feed the fish. The way we do that is based on computer vision.

We’ve started to build out the system and we’re going to release the first product in the next three to six months. For us, once we get the first products in the water collecting data, the next step is to be able to test the feed optimization.  
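As a purely illustrative sketch (not Aquabyte’s actual model), the pieces fit together roughly like this: vision-derived fish lengths feed the standard fisheries length-weight relation W = a · L^b, which scales up to pen biomass and a daily feed estimate. The constants below are made up.

```python
# Hypothetical constants; real values are species-specific and calibrated.
a, b = 0.01, 3.0       # length-weight coefficients (grams, centimeters)
feed_rate = 0.01       # hypothetical: feed 1% of biomass per day

def estimate_weight_g(length_cm: float) -> float:
    return a * length_cm ** b

measured_lengths_cm = [45.0, 52.3, 48.1]   # from computer vision on in-pen cameras
mean_weight_g = sum(map(estimate_weight_g, measured_lengths_cm)) / len(measured_lengths_cm)
biomass_kg = mean_weight_g * 200_000 / 1000   # scale the sample to a 200,000-fish pen
print(f"estimated biomass: {biomass_kg:,.0f} kg; "
      f"daily feed: {biomass_kg * feed_rate:,.0f} kg")
```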

Shang on the Princeton community:

While I was in Norway, I set up a bunch of pilots with farms, then came back to the U.S., and around the same time connected with AEF (the Princeton Alumni Entrepreneurs Fund), which also invested in the pre-seed.

PHOTO: Bryton Shang ’12 speaks at TigerTalks on the Road

Photo by StillNation Photography.

It’s been really great to incorporate the Princeton community. Half our team is from Princeton, and one of our board members, Amit Mukherjee, is Class of 2010. We have a number of ties and we’re really excited. We’re looking to hire three more software engineers – hopefully get some more Princeton guys on board!

Singh on one of Artiman’s investments in AI and computer vision:

We have a company called BoxBot. It focuses on the last mile of delivery. How many FedEx/UPS packages actually get stolen in the U.S. every year? Someone take a guess. Eleven million packages get lost every year. Number two: if you spend a dollar on delivery, how much of that goes into the last mile? Once you’ve left the highway and you’re in zip code 94301, stuck in traffic on University Avenue – how much money gets spent there? Seventy-odd cents. So 70 cents of the dollar gets spent on the last mile. This company says: we want to solve that problem.

So the FedEx guy comes and puts the package in a BoxBot container in that zip code. From that point on, a text message goes to the owner saying “respond when you’re ready to receive the package at home.” At that point a little robot device will step out of the box, and all of this is enabled by computer vision. It’s a simple problem to solve; if Amazon packages can be picked out of a random bucket, this is an easy problem. So the robotic device goes down University Avenue, makes a left on Cowper, right on Hamilton, arrives at your address – I’m getting to my home this way (crowd laughs) – and then you step out, punch in a code, open the door, get your box, and go home. Very simple AI, very simple computer vision, but it’s solving a very real problem.
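The workflow Singh walks through can be sketched as a tiny state machine. This is purely hypothetical code; no real BoxBot API is referenced.

```python
# Hypothetical drop-off / notify / code-verified pickup flow.
from dataclasses import dataclass

@dataclass
class Delivery:
    package_id: str
    pickup_code: str
    state: str = "IN_CONTAINER"  # courier has dropped it in the local container

    def owner_ready(self):
        # Owner replied to the "ready to receive?" text; dispatch the robot.
        self.state = "EN_ROUTE"

    def arrive(self):
        self.state = "AT_DOOR"

    def release(self, code: str) -> bool:
        # Recipient punches in the code; open the door only if it matches.
        if self.state == "AT_DOOR" and code == self.pickup_code:
            self.state = "DELIVERED"
            return True
        return False

d = Delivery("pkg-42", pickup_code="1234")
d.owner_ready()
d.arrive()
print(d.release("1234"), d.state)  # True DELIVERED
```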

PHOTO: Ajit Singh of Artiman Ventures speaks at TigerTalks on the Road

Photo by StillNation Photography.

Rosedale on intellectual property:

Patents are nothing more than protection for the solution to a problem. The question is, you’re constantly solving problems every day. Patents, as we all know, are not cheap. They are expensive. So what do you focus your patents on? What is your patent strategy?

PHOTO: Jeff Rosedale ’86 speaks at TigerTalks on the Road

Photo by StillNation Photography.

I was at Princeton on Friday at the TigerLaunch event, talking to the student entrepreneurs about, “what do you invest your precious funds, your seed capital, in?” You do want some control, you do want your IP. What do you invest in? You invest in that which the world is going to beat a path to your door for.

Watch the entire panel discussion and Q&A session in the archived Facebook Live stream.

View photos from the event on our Facebook page. 

PEC thanks the panelists for their time and insights; the Princeton Club of Northern California and Princeton Alumni Angels for their help in bringing TigerTalks back to California; and Brad Rock ’80 for his assistance in hosting the event at DLA Piper’s Palo Alto office.