Just thought I'd post a new concept sketch showing what the shared workspace might look like.
You'll be able to toggle the video stream received from each user on and off by moving your hand over a projected button (or something). Also, we'll bring some design sketches/storyboards to tomorrow's meeting for discussion.
2012-09-27
2012-09-26
Detecting colored shapes!
As you all know, we (the projector group) are trying to work out a way to share a physical workspace over the web using a projector and a camera. In short, we will project a rectangle on a desk, film the desk and then analyze the video to find the rectangle. Whatever the camera sees within the rectangle is streamed to the other users.
Nicklas (PB!) and I have been working on detecting colored shapes in a live video stream. This is done in a few simple steps (a rough code sketch follows the list):
- Convert the image from RGB to HSV color space.
- Use thresholding to filter out everything but the (in this example) red parts of the image.
- Detect polygons in the filtered image with the help of js-aruco (http://code.google.com/p/js-aruco/).
- (upcoming) Decide which, if any, of the detected polygons is the rectangle we are looking for.
- (upcoming) Calibrate the outgoing video stream using the corners of the rectangle.
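For anyone curious, here's a minimal JavaScript sketch of steps 1-2, operating on a canvas ImageData frame. The threshold values are example guesses that need tuning, and step 3 (polygon detection) is left to js-aruco:

```javascript
// Steps 1-2: convert each RGBA pixel to HSV, then keep only "red enough"
// pixels in a binary mask. The mask would then be fed to js-aruco.
function redMask(imageData) {
  const d = imageData.data; // flat RGBA byte array
  const mask = new Uint8Array(imageData.width * imageData.height);
  for (let i = 0, p = 0; i < d.length; i += 4, p++) {
    const r = d[i] / 255, g = d[i + 1] / 255, b = d[i + 2] / 255;
    const max = Math.max(r, g, b), min = Math.min(r, g, b);
    const v = max;                               // value
    const s = max === 0 ? 0 : (max - min) / max; // saturation
    let h = 0;                                   // hue in degrees
    if (max !== min) {
      if (max === r) h = (60 * (g - b) / (max - min) + 360) % 360;
      else if (max === g) h = 60 * (b - r) / (max - min) + 120;
      else h = 60 * (r - g) / (max - min) + 240;
    }
    // Red hues wrap around 0 degrees; the cutoffs here are just examples.
    mask[p] = ((h < 20 || h > 340) && s > 0.5 && v > 0.3) ? 255 : 0;
  }
  return mask;
}
```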
It works pretty well; check out the video demonstrating steps 1-3 in action below! The white frame on the right shows the filtered image, and the detected rectangles are marked in blue.
PS. Sorry for the lack of sound in the video, we'll make a new one if you really want to hear our voices! :)
PPS. We are also working on some artwork for the project (logo coming soon!), and some storyboards in collaboration with the other groups.
2012-09-20
Projector / Camera group update
Greetings and salutations!
Over the past week, we in the Projector/Camera group have been working on breaking down our part of the project. We need to get acquainted with the tools we are working with, get a feel for what we can (and want to) accomplish, and figure out how time-consuming the different aspects will be.
For now we've settled on a few different sub-problems, which we have divided amongst ourselves and will work on solving in the coming days.
- We need to be able to draw and find a rectangle in the video stream. This will be the area of our camera feed that we transmit to the other participants. Jimmy & Nicklas will be working on figuring this out.
- Elias will use the aforementioned rectangle to create a mapping between what the camera "sees" and what the projector will show.
- Patrik will work on using the position of the rectangle to crop it from the original stream, so we can send only the relevant information (a rough sketch of this follows the list).
- At some point we want to have multiple participants, so John will look into how the different video streams should be merged into a single shared workspace.
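To give an idea of the cropping sub-problem, here's a minimal JavaScript sketch, assuming the detected rectangle has already been reduced to an axis-aligned bounding box in camera coordinates (the rect values below are hypothetical):

```javascript
// Continuously copy only the workspace rectangle out of the camera feed.
const video = document.querySelector('video');  // the camera feed element
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');

function cropFrame(rect) {
  canvas.width = rect.width;
  canvas.height = rect.height;
  // Draw just the detected rectangle from the full frame onto the canvas.
  ctx.drawImage(video, rect.x, rect.y, rect.width, rect.height,
                0, 0, rect.width, rect.height);
  requestAnimationFrame(() => cropFrame(rect));
}

cropFrame({ x: 100, y: 80, width: 640, height: 480 }); // example values
```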
When discussing the idea and playing around with the technology, we thought of some additional ideas which we believe would prove to be both useful and neat.
- In a session with multiple participants, we want to be able to turn the feed from one or several of the incoming video streams on and off. We figure we'd do this by having some sort of button for each connected person, which can be used to toggle their stream on and off.
- It should be possible to take a snapshot of the current workspace and store it, either as a picture/sketch or to restore the workspace later and continue working on it (a rough sketch of the idea follows below).
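Just to make the snapshot idea concrete, a tiny sketch, assuming the shared workspace is composited on a canvas (all names here are ours):

```javascript
// Store the current workspace canvas as a PNG data URL...
function snapshotWorkspace(workspaceCanvas) {
  return workspaceCanvas.toDataURL('image/png');
}

// ...and restore it later by drawing the saved image back.
function restoreWorkspace(workspaceCanvas, dataUrl) {
  const img = new Image();
  img.onload = () => workspaceCanvas.getContext('2d').drawImage(img, 0, 0);
  img.src = dataUrl;
}
```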
We will, however, put these ideas on hold for a while, at least until we have a working prototype.
WebRTC status
Currently the WebRTC group is divided into two smaller groups: one that has been focusing more on the front end and one that has been looking into the back end.
For the back end, after looking over some alternatives, we have decided to continue with Google App Engine for Python. The front-end group has been working on using multiple video tags and on controlling one window from within another window (to be used for controlling the workspace).
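As an example of the window-within-a-window idea, here's a sketch of how the main window might open and talk to a workspace window using postMessage (the file name and message format are made up for illustration):

```javascript
// The main (conversation) window opens a separate workspace window.
const workspace = window.open('/workspace.html', 'workspace');

// Tell the workspace window that a new stream has arrived (in practice
// we'd wait for the child window to signal that it has loaded first).
workspace.postMessage({ type: 'stream-added', peer: 'Jonas' }, location.origin);

// Poll to detect that the workspace window has been closed.
const watchdog = setInterval(() => {
  if (workspace.closed) {
    clearInterval(watchdog);
    console.log('workspace window was closed');
  }
}, 1000);
```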
The nearest future for the WebRTC group will mainly be in creating a more usable front end and writing a new back end to suit our future needs.
Tangible group update
Hello again. It's time for a mid-sprint update. We might have neglected to mention it on this blog, but each sprint starts and ends on a Thursday. We have weekly meetings every Tuesday, where we discuss what everyone has done, what we are going to do in the coming week, and whether we need to coordinate between the groups.
So far everything is on track, and we published a preliminary Gantt chart last Tuesday. I think the chart provides a pretty good overview of how the project will progress, and it also shows when we expect to make releases. If everything goes as planned, there will be a website up and running where anyone can test the progress.
Regarding the tangible group, we have made some progress. You have already seen the video where Mattias shows some progress on the Sifteo cubes. For interacting with the cubes, we are using an API developed earlier by another student at the university. This API is intended for the web and makes it possible for us to interact with the cubes from JavaScript. Part of this sprint is to plan ahead, so we are investigating whether this API is enough for our needs or whether we have to make some additions.
There is also another interesting thing about the Sifteos. It appears the company is gearing up to release a new version of the Sifteo cubes, along with a new SDK. Information about this is really difficult to find, but this blog post has the best information I've found so far. It is unclear whether the new SDK will be compatible with the cubes we have now.
We have made less progress with the Spheros. We have set up a meeting next Tuesday with a student who worked on them earlier, and hopefully that will speed things up. Since the Spheros are meant to be controlled via a smartphone, there appears to be no official support for controlling them from a computer. This might mean that the most convenient approach is to route commands via a smartphone, but that remains to be seen.
2012-09-18
2012-09-14
Sifteo Demonstration
We have published a short demonstration of the Sifteo cubes being used from the web browser.
Projector and camera group
We in the projector group (Elias, Jimmy, John, Nicklas, Patrik) have started tossing around a few ideas for our part of the project. Not much is decided yet, but this is what we know for certain:
We're going to create a system which allows you to share your physical workspace with other people over the Internet, using a camera and projector. Our first version of the program will allow two users at a time; we'll call them person A and person B.
When person A uses our system, a camera mounted above her workspace films her desk and streams the video data (or interesting parts of it) to person B. Person B has a similar setup, which sends data to person A.
Using a projector, person A's physical desk is augmented with the digital data received from person B, and vice versa. Each participant will be able to see (parts) of the other person's desk projected on their own, creating the illusion of having one shared workspace.
What we have yet to decide is exactly what information should be sent, and how it will be displayed. Our target group is mainly architects, who will be using the system to collaborate on drawings while working in different locations (of course, there are more possible uses for such a system).
In the above image, you can see a sketch of our first idea for an implementation: We thought of separating the area of a user's workspace that is being filmed from the one where received information is displayed. However, we decided against this idea, since what we're really after is giving the users the illusion of working in the same place. This means, for example, that users should be able to draw on the same piece of paper.
For now, we'll play around with different Java libraries for a while and try to find out what we can actually do. We'll post some info on our new ideas soon!
State of the project
Hello again. I think it's time to make a proper introduction of the project, the members, and our goal.
The purpose of the project is to investigate how tangible devices, WebRTC and image analysis can be used to simplify communication and let users share a common workspace. The tangible devices would provide additional ways for the user to receive output and provide input to applications.
One common issue with current conventions is that annoying popups interrupt you while you are working. Imagine instead that your monitor is a region dedicated to what you are currently doing, and that other objects - tangibles - provide an unobtrusive way to notify you of events unrelated to your current task.
The tangible objects that we are currently playing with are Sifteo cubes and Sphero balls. One scenario for notifying the user of an incoming call could be for a Sphero ball to start changing colors. The Sifteo cubes could at the same time display information, such as a picture of who's calling.
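A purely illustrative sketch of that scenario in JavaScript (the sphero and cube objects below are made-up stand-ins, not a real API):

```javascript
// Made-up stand-ins for real device wrappers, just for illustration.
const sphero = { setColor: (rgb) => console.log('sphero color:', rgb) };
const cube = { showImage: (img) => console.log('cube shows:', img) };

// Notify the user of an incoming call: pulse the Sphero between two
// colors and show the caller's picture on a Sifteo cube.
function onIncomingCall(caller) {
  let on = false;
  const pulse = setInterval(() => {
    sphero.setColor(on ? [0, 0, 255] : [255, 255, 255]);
    on = !on;
  }, 500);
  cube.showImage(caller.picture);
  return () => clearInterval(pulse); // call this when the call is answered
}
```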
The project is based on a proposal by a research group here at Luleå University of Technology. The proposal is available on the course homepage. The course itself is a project course spanning over a whole term (roughly until the end of this year).
We are 14 fifth-year computer science students working on this. We divided ourselves into three groups: an image analysis group, a WebRTC group, and a tangible devices group.
- WebRTC group
- Jonas Brood
- Johan Andersson
- John Viklund
- Karl Öhman
- Image analysis group
- John Ek
- Patrik Burström
- Nicklas Nyström
- Jimmy Nyström
- Elias Näslund
- Tangible group
- Stefan Sundin
- Alexandra Tsampikakis
- Mattias Lundberg
- Viktor Lindgren
- Samuel Sjödin
In our first meeting, I volunteered to become project lead. Once we had divided into groups according to our interests, I decided that the first name in every group should be the leader of that group. To give everyone a chance to take on some responsibility, we decided to rotate the group leader role over the course of the project. This brings us to the planning part.
The project will use agile software development methods. We will work in two-week sprints, which means 6-7 sprints before the end of the project. At the end of each sprint, the group leader role will be handed over to the next person. We are working to define the goal of each sprint, and will post this information when it's available.
The first sprint will mostly deal with getting stuff up and running. We've already had some success with WebRTC and the Sifteos. More details on this in later blog posts. The first sprint will conclude on September 27th.
We intend to release all of our results and code to the public. Our code repository is publicly available on GitHub.
WebRTC design
This week the WebRTC group has worked on an initial plan for basic functionality.
First of all, we want users to be able to create rooms, either private (possibly password protected) or shared. In a room, two or more people can see/listen to each other and see/edit the shared workspace (the design of the workspace is still to be determined).
Users should also have the option to select their own name.
An example of how our idea would work in practice:
Karl goes to our website and selects his name "Karl". He creates a room called "Room 1".
Another user, who calls himself "Jonas", enters Karl's room. Now they can see and talk to each other while sharing the same desktop workspace!
On the client side, our initial plan is to have two separate browser windows: one main window to show the conversation, and another window to display only the workspace. The windows will need some way to communicate (e.g. that a window has been closed).
We will start by using Google App Engine for handling our webpage and the rooms.
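To make the Karl/Jonas example a bit more concrete, here's a very rough client-side sketch using the standard WebRTC API (the signaling through our room server is left out, and the element id is made up):

```javascript
// When Jonas enters Karl's room: grab his camera/microphone, set up a
// peer connection, and prepare an offer to send via the room server.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(async (stream) => {
    const pc = new RTCPeerConnection();
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
    // Karl's incoming video goes into one of our multiple <video> tags.
    pc.ontrack = (event) => {
      document.querySelector('#remote-video').srcObject = event.streams[0];
    };
    // Create an offer; it would then be sent to Karl via the room server.
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
  });
```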
We'll be back...
2012-09-11
Svenglish
Thanks for that post, Johan.
Since we have people in Delft and other places who don't speak Swedish, we're switching to English from now on. Danke schön.
Here is our repository: https://github.com/stefansundin/The-Tangibles
First post
Today, work on the project got started. We did some planning in the morning and decided to split into three groups, since there are 14 of us working on the project. The three groups are:
- WebRTC - responsible for the video/audio streams between the users (http://www.webrtc.org/). (4 people)
- Projector - responsible for image recognition and the hardware to set up a test environment. (5 people)
- Tangibles - responsible for the Spheros (http://www.gosphero.com/) and the Sifteo cubes (https://www.sifteo.com/). (5 people)
Before we decide which features to include in the project, we agreed it would be good to gain some more knowledge about the different areas. We will therefore spend the first two weeks building small prototypes.
To keep track of software versions, we have chosen to use GitHub (https://github.com/).
Time reporting is done using a Google spreadsheet.
We took some pictures inside the project room:
AfroJohan trying to start a computer
Samuel inspecting a Sphero
Sifteo cubes!
Testing WebRTC
General work in progress
Project lead Stefan
Already a mess on the first day