Thursday, January 24, 2013

Brainstorming: Use Cases

One of the most exciting aspects of our project so far has been finding a niche for this technology. 3D video conferencing is great: we get to play with fun cameras like the Microsoft Kinect and work with emerging web technologies such as WebGL and WebRTC. So what are we going to do with it?

We see a lot of applications for this technology in interactive live events: concerts, stage performances, sporting events. But the point of the Mozilla Ignite competition isn't to offer the best sports streaming over the web (however mind-blowing that would be); it's to leverage these ultra-fast networks and come up with applications that are beneficial to the country and will affect people. What we want to do is show everyone how this gigabit/programmable networking technology can change lives.

There is currently a huge push in the U.S. educational system to promote Science, Technology, Engineering, and Mathematics (STEM) education and outreach. Our local university, UTC, has a full-time STEM outreach coordinator, and Chattanooga is opening a new school geared entirely toward STEM. We believe that engage3D can be both a compelling demonstration of new web technologies and a highly effective means of communicating educational material.

Enter the Nautilus

A week ago Friday, Will Barkis, one of the program managers from Mozilla Ignite, landed us a conference call with some folks from the Ocean Exploration Trust. This is a very high-profile group working with NOAA, National Geographic, the Sea Research Foundation, and others, all very established names in research and educational outreach. They travel with the Nautilus and give 5-7 video conferences A DAY to students all across the country. Take a minute and look through their websites to understand the magnitude of the projects they are involved in. Essentially, they capture HD video and laser scans of famous shipwrecks and broadcast their findings, all in the name of STEM.

This was an amazing call, and they showed a sincere desire to be involved with a 3D telepresence app displaying interactive content from atop one of their ships. Their response to "would this be of value to you for your educational work?" was "OF COURSE!" I could go on and on about this, but I'd rather not jinx it... So we're going to pursue this relationship and see if we can be of value to them in the future, once our application is more mature.

And, back to reality

Anyway, what I've taken from this conversation is that there are really valuable use cases for our engage3D project, especially in education. Mozilla sees the value, the Ocean Exploration Trust sees the value, so we're going to gear this tool toward giving lectures or tutorials where 3D information is beneficial. We just pitched back to Mozilla last night for our next round of funding, and Will had many useful suggestions for building out our use case, such as looking at stereo 3D as well as the point clouds. I am also trying to find a way to justify the purchase of a small hologram projector... I think we're in for a lot of fun with this project in the upcoming months.

Monday, January 14, 2013

Mozilla Ignite, Moving Forward! - Development Round #2

Family fun on Main St. Chattanooga, TN
engageCV had a very exciting month of December. After we found out that Mozilla had funded us to continue development on our 3D video conferencing application, Andor and I immediately made plans to get him back down for a hack-a-thon at the co.lab in Chattanooga, TN.

New Team Members: David Collao & Forrest Pruitt

Knowing that we were going to need additional resources and talent, we recruited two new team members from the UTC SimCenter: David Collao, a PhD student in computational engineering, and Forrest Pruitt, an assistant systems administrator for the SimCenter. David is tasked with camera calibration and 3D geometry processing, while Forrest is in charge of WebRTC integration and systems administration. Forrest, who has a strong interest in game development and programming, wants to add OpenGL/WebGL skills to his resume through his engagement with our project.

Hacking...

The majority of the hacking happened the week leading up to Christmas. Forrest and David were both free from their normal course load, and Andor was on vacation from his regular 9-5. The group camped out at the co.lab all day from Wednesday until Sunday, with me joining in the evenings. We went solid through the week, right up until Christmas Eve. Overall it was a great time; we have a very strong, motivated, and interesting team.

What progress!?

Great thanks to George MacKerron for his related work on Kinects and WebGL. We were able to adapt many valuable bits of information from his work here: http://blog.mackerron.com/2012/02/03/depthcam-webkinect/

  • We dropped the compression routines and other features to allow faster rendering; we're currently transmitting and rendering this data at around 135 Mbps
  • 3D colored point cloud transmitted over LAN and rendered in the browser with WebGL
  • WebRTC mic integration
  • Researched camera calibration and how to interface with multiple cameras
  • Explored various networking topologies; unfortunately we were not able to gain access to the local GENI networking resources, which recently changed hands from EPB to UTC and are currently offline
  • Discussed potential use cases for this application and have many ideas, most focused around education
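For context on that ~135 Mbps figure, here's a back-of-the-envelope estimate of the raw data rate for an uncompressed colored point cloud. This is a sketch only: the frame size, per-point payload, and frame rate here are illustrative assumptions, not our exact wire format.

```javascript
// Rough bandwidth estimate for an uncompressed colored point cloud.
// Assumed payload: 16-bit depth + 24-bit RGB = 5 bytes per point.
function pointCloudMbps(width, height, bytesPerPoint, fps) {
  const bytesPerFrame = width * height * bytesPerPoint;
  const bitsPerSecond = bytesPerFrame * fps * 8;
  return bitsPerSecond / 1e6; // megabits per second
}

// Full Kinect resolution at 30 fps:
const full = pointCloudMbps(640, 480, 5, 30); // ≈ 368.6 Mbps
// Downsampling by 2 in each axis cuts this by 4:
const down = pointCloudMbps(320, 240, 5, 30); // ≈ 92.2 Mbps
console.log(full.toFixed(1), down.toFixed(1));
```

Skipping points or trimming the frame rate is how uncompressed numbers in the ~135 Mbps range become reachable, which is why gigabit links matter for this app.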
Work is moving from research toward implementation, and we believe we'll be able to start building a robust, enterprise-level system shortly. Our next efforts will still focus on research, with a final coding sprint to get our system beta-ready.

Here's a short video for your viewing pleasure:


Our RGB camera is still out of sync with the point cloud; this will be resolved shortly. Also, after taking a hard look at this, I'm going to allow the user to select different rendering modes, offering a triangulation of the scene along with background subtraction.
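For the background subtraction mode, the simplest approach with depth data is a per-point cutoff: drop any point whose depth reading is zero (no return from the sensor) or beyond a chosen distance. A minimal sketch follows; the threshold value and flat-array layout are illustrative, not our actual pipeline.

```javascript
// Keep only points closer than maxDepthMm; the Kinect reports 0
// for pixels with no depth reading. depth is a flat array of
// per-pixel depth values in millimeters.
function backgroundMask(depth, maxDepthMm) {
  const mask = new Uint8Array(depth.length);
  for (let i = 0; i < depth.length; i++) {
    const d = depth[i];
    mask[i] = d > 0 && d < maxDepthMm ? 1 : 0;
  }
  return mask;
}

// Example: keep everything within 1.5 m of the camera.
const mask = backgroundMask([0, 800, 1200, 2600, 1499], 1500);
console.log(Array.from(mask)); // [0, 1, 1, 0, 1]
```

The mask can then gate which points get uploaded to the GPU (or transmitted at all), which also helps with the bandwidth numbers above.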

I'm still in the process of cleaning up the code, but look for the project on GitHub soon.

You can now view the code at: https://github.com/bbrock25/engage3D

Next Steps

  • InstaGENI rack integration
  • Apply calibration routines to allow for multiple cameras
  • Develop new algorithms to support multiple point clouds
  • Allow visualization of triangulated surfaces built from the point cloud; point cloud data, while very flexible, might not be the best medium for a video conference
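On that last point, one nice property of Kinect data is that the points arrive on a regular pixel grid, so a first-pass triangulation can simply connect each 2×2 cell of neighboring pixels into two triangles. Here's a sketch of the index generation only; handling holes and depth discontinuities between neighbors is left out.

```javascript
// Generate triangle indices for a width x height grid of points:
// two triangles per 2x2 cell of neighboring pixels.
function gridTriangleIndices(width, height) {
  const indices = [];
  for (let y = 0; y < height - 1; y++) {
    for (let x = 0; x < width - 1; x++) {
      const i = y * width + x;                       // top-left corner of the cell
      indices.push(i, i + 1, i + width);             // upper-left triangle
      indices.push(i + 1, i + width + 1, i + width); // lower-right triangle
    }
  }
  return indices;
}

// A 3x3 grid has 2x2 = 4 cells, i.e. 8 triangles (24 indices).
console.log(gridTriangleIndices(3, 3).length); // 24
```

An index buffer like this could feed a WebGL indexed draw call directly, with the existing point positions reused as the vertex buffer.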