2012-10-30

Tangible Status Update

The Sifteos are integrated as found in the release, with some minor fixes applied since. Apart from that, we have started to integrate with the projector group; the plan is to show the AR codes for controlling the size of the workspace.


We are now able to control the Spheros from the web browser as previously shown. Since then we have integrated this with other parts of the application and are now, to some extent, able to notify the user about an incoming call. The user can even answer a call by pushing the ball away.

Until the next release we will refine how the interactions with the user occur. Part of this is to fully integrate with the rest of the application. Most of it, however, is to make the driver more usable so that pairing with the Sphero requires less manual interaction. This also includes writing installation and setup instructions for the driver.

2012-10-27

Release 1 aka "Arnold"

It's finally time for our first release.

To try it out, you first have to configure your computer properly. To use the tangible items, you have to install some applications; for this release you are limited to Sifteo support. For the shared workspace you will also need a projector and a web camera.

To simply use the video conferencing capabilities, follow these instructions:
  1. Use Google Chrome.
  2. Open chrome://flags/ and enable PeerConnection.
  3. Make sure your webcam is functioning (e.g. by testing it at this site).
You may now use the website without Sifteo support. The Sifteos have only been tested on Windows 7; they are also available for OS X, but this has not been tested.

To use the Sifteos:
  1. Install Siftdev. This is a developer version of the normal Siftrunner application.
  2. Make sure Java is installed.
  3. Install Mono.
  4. Download and extract Tangibles-Release1.zip.
  5. Open run_TangibleAPI.bat. If it's working you will see "TANGIBLE_API_READY" at the bottom of the console window.
  6. Open run_SiftDriver.bat. If your Mono version is not 2.10.9, then you first have to edit run_SiftDriver.bat and update the path.
  7. Connect at least three Sifteos in Siftdev.
  8. Load app in Siftdev. Menu: Developer -> Load Apps -> select Sifteo directory.
  9. Press Play.
  10. The Sifteos should now work. Enjoy.
To enable shared workspace:
  1. Set up a projector and web camera, preferably in the ceiling, pointing down at your table.
  2. (Optional) Print out or draw the Left marker and Right marker on two separate papers. These will be used to move (Left) and resize (Right) the shared workspace by moving them around on the table in front of your camera.
  3. Press Enter or click anywhere in the browser to finish the workspace configuration.
  4. Make sure to select the proper webcam when opening the workspace view.
This should now be working but you may have to adjust the distance of your projector and camera to get the optimal settings. We will show you how to do this at a later time.

The release is live at tangible.se. We hope to see you there.

2012-10-24

Local Server

As developers we want to be able to test new functionality easily on our local machines before deploying it to the server. Testing the server locally lowers the risk of breaking something for someone else. Installing the server locally has so far been quite hard; the main problem has been the node.js package dependencies, which have changed over time.
We have made some changes to how we install the server, to make it easier for everyone in the project to test their code locally (https://npmjs.org/doc/json.html).
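In sketch form, pinning exact dependency versions in package.json is what makes a local install reproducible. The package names and versions below are illustrative assumptions, not our actual manifest:

```json
{
  "name": "tangible-server",
  "version": "0.1.0",
  "dependencies": {
    "express": "3.0.0",
    "socket.io": "0.9.10"
  }
}
```

With exact versions listed, `npm install` gives every developer the same dependency tree instead of whatever the latest release happens to be.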


2012-10-22

API

The web socket message API has over the last week been fleshed out from its earlier skeleton, both in how the front end connects to the back end and in how the back end pushes messages (events) to the front end. An example:
A user enters the lobby and enters a screen name. Through a web socket the name is sent to the server, which in turn tells the other clients that a new user has connected. The same goes for when a client leaves the system. With this design the other connected clients only receive a small message containing this information, while the connecting client alone receives the whole list of currently connected users.
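As a sketch, the connect flow above could look like the following. The message field names ("type", "name", "users") are assumptions for illustration, not our final API:

```javascript
// Message the connecting client sends over the web socket.
function connectMessage(name) {
	return { type: 'connect', name: name };
}

// What the server sends back: the new client gets the full user list,
// while everyone else only gets a small notification message.
function handleConnect(msg, connectedUsers) {
	connectedUsers.push(msg.name);
	return {
		toNewClient: { type: 'userList', users: connectedUsers.slice() },
		toOthers: { type: 'userJoined', name: msg.name }
	};
}

// Example: "carol" joins a lobby where "alice" and "bob" are present.
var result = handleConnect(connectMessage('carol'), ['alice', 'bob']);
```

The same split applies on disconnect: a small "userLeft" message to everyone still connected, and nothing to the client that left.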

Some of the other features currently implemented are making and receiving calls, which we intend to show in a video next week once they are integrated with the tangible devices.

2012-10-19

Tangiblies - Sphero

In the Sphero group we have now managed to read both accelerometer and gyro data. The data is also sent to the gateway, which forwards it to the web browser. There is currently a limitation: both cannot be measured simultaneously.

We have built a test page to visualize the sensor data, which is shown in the videos below.







2012-10-18

Weekly update

The WebRTC group of the project has started working on the client-server communication. Some of the basic functionality has been finished, and the goal is to complete all the basic functions within the coming week. Other things that have been worked on are the integration of the different views (lobby/room), making the system easier to use, and the integration with the tangibles and the projector. The following week the usability will be improved, in the room view as well as in the lobby.

The projector group has almost finished the calibration between camera and projector; some tuning remains. Related to this are touch buttons ('press' a part of the surface to trigger an action), which are finished but need to be integrated with the rest of the application. The projector group will start adding communication between workspaces by using methods provided by the WebRTC group.

For the Sphero part of the tangibles, we have been able to make it spin from the web browser. However, this only works once; after that spin the Sphero becomes unresponsive. This will be one of the Sphero work areas for the week. The other is to report events all the way to the web browser; right now we have no events even in the Sphero driver. The work here is to convert the sensor data to events and then send them through the gateway to the browser.
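One way the sensor-to-event conversion described above could be sketched. The threshold value and the "shake" event name are illustrative assumptions, not our actual driver code:

```javascript
// Turn raw accelerometer samples into discrete events that could be
// forwarded through the gateway to the browser.
var SHAKE_THRESHOLD = 2.5; // in g; chosen arbitrarily for this sketch

function toEvent(sample) {
	// Magnitude of the acceleration vector.
	var magnitude = Math.sqrt(sample.x * sample.x +
	                          sample.y * sample.y +
	                          sample.z * sample.z);
	if (magnitude > SHAKE_THRESHOLD) {
		// A discrete event the browser can react to.
		return { type: 'shake', magnitude: magnitude };
	}
	return null; // below threshold: no event for this sample
}
```
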
The Sifteo part of the project has been focusing on simulating what will happen on a room invite and when the user enters a room. The focus for the following week will be to fully integrate this with the server, listening for events from the server and responding to them in a nice way. When this has been done, more testing will follow.

Apart from the group-specific work, all groups will revisit their user stories to make them fit better with small changes in the project and integrate better with each other.

2012-10-16

Coding guidelines

It is time to start merging the code from the different groups. Unfortunately we have all used different techniques and no common file structure. We have now agreed upon some standards that we are going to use in the final product.

First off, the file structure layout:
/
    js/
         lib/  -- here go external libraries that we haven't coded ourselves
         -- here goes our own JavaScript code
    css/ -- here go all CSS files
    img/ -- here we have all static images
    -- here go the HTML files

It is a very simple layout; it might expand if needed.

Since the different groups have used different coding styles, the common ground we agreed on is to minimize global variables and functions and to use prototypes to create objects. Each object or set of objects is stored in its own file. This makes it a lot easier to get a grip on the system, minimizes coupling, eases maintenance, etc.

A short, simplified example of this style using prototypes. We will use these styles when creating classes and their methods. This content would be written in a file called ClassName.js:

function ClassName(varA, varB) {
	this.a_ = varA; // Private (by convention, trailing underscore)
	this.b = varB; // Public

	/* One way of creating methods */
	this.getB = function() {
		return this.b;
	};
}

/* Another way of creating methods */
ClassName.prototype.getA = function() {
	return this.a_; // note: must be accessed via this
};


We also agreed on some naming conventions, just some basic stuff. Each class name has to start with a capital letter. All variables and functions start with a lower-case letter. If a name is a composition of multiple words it uses camel case, like yourName, not underscores like your_name. Constants are named with capital letters and underscores, e.g. YOUR_CONSTANT.

For indentation a tab character is used. NOT spaces.

In this way we hope to get more readable code and a structure that gives a good overview and easier maintenance.

jQuery is used for manipulating the HTML code, hiding divs, etc.

2012-10-15

A lobby?

We've made some progress on the main page. This includes a basic API for websocket-communication and an interface for the lobby page.

Lobby interface (we know it's a lot of white space).

The main idea of the lobby is to display a list of available rooms and their participants. You also have other features like changing your own name and creating a new room. At the bottom of the figure there is a pending invite, or a "call", which you may accept or decline.

Our (the WebRTC group) plan for this week is to complete most of the "call" functionality and further integrate with the tangibles.

2012-10-12

Projector calibration up and running

As you know if you're following this blog, we in the projector group have finally started working with an actual projector. We've only been able to test it with the projector aimed at a wall (since we haven't gotten it mounted yet), but it's still been very useful.

The most important discovery we made was that our initial idea to calibrate the video stream by finding a colored rectangle doesn't work as well as we'd hoped: the hue of the rectangle in the browser window is radically different from how the camera sees the projected image; red becomes purple and yellow becomes green. We have a few ideas of how we could tackle this problem, but we decided to simply change method instead.

We tweaked our existing code to only use AR markers for the calibration. This is a method we've had in mind for a while (kind of a plan B), and it turned out to be a better solution: with our first method, the program could find several colored rectangles with no way of knowing which was the right one. An AR marker represents a number between 0 and 1023, removing all risk of ambiguity. Hopefully we'll still find some use for our color filtering code, since it was pretty cool : )
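In sketch form, this is why the marker ids remove the ambiguity: each detection carries an id, so detections can be assigned roles unambiguously. The specific ids and the id-to-role mapping below are assumed conventions for illustration only:

```javascript
// Hypothetical ids for the calibration markers.
var MARKER_ROLES = { 7: 'topLeft', 42: 'bottomRight' };

// Given a list of detections (id + position in the camera image),
// assign each known marker its role; unknown ids are simply ignored.
function assignRoles(detections) {
	var roles = {};
	detections.forEach(function (d) {
		var role = MARKER_ROLES[d.id];
		if (role) {
			roles[role] = d.position; // e.g. pixel coordinates
		}
	});
	return roles;
}
```

With colored rectangles, two detections of the same color were indistinguishable; here a stray detection with an unknown id can never be mistaken for a calibration marker.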

In conclusion, I think you could say that we've taken one step back and two big steps forward. We'll soon post info on our plans for the coming weeks, together with a few storyboards describing our new method.

2012-10-11

Sprint 2 complete

Hello readers.

We've just finished our second sprint, and I think it's time for a status update.

As far as getting stuff up and running goes (which was the goal for sprint 1), I think we can say we got the last tech up and running this week, namely the Spheros. We also fixed a couple of bugs in the tangible API that we're using. You see, the tangible objects (Sifteos and Spheros) are connected to the web browser through a gateway server that was developed at LTU earlier. We connect to the gateway server via a RESTful API and web sockets. The gateway server then in turn handles the communication with the devices. This is a pretty neat way of controlling stuff from the browser, and it avoids a lot of the trouble one would otherwise encounter when attempting to do the communication via a plugin.
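As a rough sketch of that browser-to-gateway path: commands go out over the RESTful API, and events come back over a web socket. The URL layout and message fields here are assumptions for illustration, not the gateway's real API:

```javascript
// Build a (hypothetical) REST URL for sending a command to a device.
function commandUrl(gatewayHost, deviceId, command) {
	return 'http://' + gatewayHost + '/devices/' +
		encodeURIComponent(deviceId) + '/commands/' +
		encodeURIComponent(command);
}

// Events pushed back from the gateway could be dispatched by type.
function dispatchEvent(msg, handlers) {
	var handler = handlers[msg.type];
	if (handler) {
		handler(msg);
		return true;
	}
	return false; // unknown event type: ignored
}
```

The point of the design is that the browser only ever speaks HTTP and web sockets; all device-specific protocols stay inside the gateway.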

The projector group has also been able to verify that their image analysis code works when using a real projector. Some lessons were learned, e.g. that projecting the color red does not mean that a perfect red color will be displayed, depending on how good the projector is. It is good that we're learning these lessons this early in the project.

The WebRTC group is working to finalize the workflow on the website. It's important to make the website easy to use and intuitive. We also need to come up with a clean design to make the whole experience great.

During the sprint to come, we will mainly code features and try to make everything as stable as possible. Maybe we'll show you some exciting sneak peeks too. =)

2012-10-10

A sphero update

In the tangible group, we have now established a connection between the Sphero driver and the web site through the gateway. In Figure 1 the Sphero is green, which means the connection to the web site is established. After this stage we can use the different command buttons. One click on the "set color to blue" button and the Sphero goes blue, as seen in Figure 2.

Figure 1 - Connected.

Figure 2 - Successful color change!

The command to make a Sphero change color is fully functioning in our system. This, however, is as far as the Sphero driver's default command support went. The next step for us (Samuel and Alexandra) will therefore be to implement other commands in the driver (commands that the Sphero API supports), such as SetSpinLeft and SetSpinRight. Each new command will from now on require additional code in the driver, the gateway, and the web page.

Figure 3. A simple overview of the tangible communication. Both the sphero driver and the driver for the Sifteo Cubes communicate with a gateway. The gateway handles all the communication to and from the website. ('Hemsida' = Swedish for website ;-)

2012-10-05

Live rectangle tracking and image transformation (Video)

It's time for a new video from the projector group! If you've read our previous posts, you know that we intend to use colored rectangles to find a mapping between what comes out of the projector and what the camera captures. This video demonstrates how we can detect distorted rectangles in different colors and map their contents to a rectangular canvas.


As you can see, the transformed image jumps around a bit, but keep in mind that both the camera and the projector will be statically mounted in the ceiling (or something), so we won't need to do live calibration in the finished product.


WebRTC group status update

This week we've started using Node.js instead of Google App Engine. The reason for our decision is that it gives us more freedom to use different platforms and libraries. We also stumbled upon some problems with App Engine that were already solved in Node. This led to us having to redo some of the things we had already done.

Additionally we have been working on an API for communication between the server and the clients using websockets.

While working with our prototype we've discussed several user scenarios. Most of these are related to the different ways a conversation can start.
The first thing a user does is to enter his name (or log in) in order to reach the lobby. The lobby view is loosely defined so far, but will include a list of available rooms and a list of the currently connected users.
At this point the user can choose to enter an existing room, create a new room or select another user to open a new room and invite the other user to it (similar to a normal phone call). 
When invited, a notification appears in the lobby window as well as on a Sifteo cube or Sphero (if one of them is connected). The user can then choose to accept or dismiss the invitation.
When in a room the user can choose to invite other users from a list. He can see the other users in the same room and can choose to open the workspace view (discussed in a previous blog post). 
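The invite notification above could be sketched like this: when an invite arrives, it is shown in the lobby window and on any connected tangible (Sifteo cube or Sphero). All names here are hypothetical, not our actual code:

```javascript
// Dispatch an invite to every connected output (lobby UI, Sifteo, Sphero).
// Returns the names of the outputs that were actually notified.
function dispatchInvite(invite, outputs) {
	var notified = [];
	outputs.forEach(function (out) {
		if (out.connected) {
			out.notify(invite); // e.g. render a banner, or light up the Sphero
			notified.push(out.name);
		}
	});
	return notified;
}
```
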

User stories from the tangible group

An update from the tangible group: we are currently working on testing some of the commands on the Sphero from the desktop API and the Sphero driver. We now manage to make the Sphero blink in different colors and spin, just with code running on our local computers. The next step for us is making a connection between the Spheros and the gateway Java server. The goal is to make the Sphero blink in a certain color when something happens, maybe a call, or a message received from a group/friend.

The other part of the tangible group is working on the connection between the sifteo cubes and the gateway java server. 

All this work is divided into user stories that are going to be implemented as time goes. They are presented in no particular order, and the list may be extended in the future.

  • As a user, I want to answer a call from the web interface
  • As a user, I want the sifteos to show who's calling
  • As a user, I want to interact with sifteos, using them for control of app
  • As a user, I want to display images on sifteos
  • As a user, I want to answer a call by clicking a sifteo
  • As a user, I want to be able to build a conversation group by using sifteos
  • As a user, I want to be able to get feedback/messages with the sphero's colors
  • As a developer, I want to modify the sphero driver so that it suits our needs
  • As a developer, I want to understand the API Gateway so that we can build a stable product
  • As a developer, I need a nice library for javascript interaction with gateway
  • As a developer, I want to test the tangibles together with the rest of the system
  • As a developer, I want unit tests for the code we write so that I can see if I break something
This is all from us now! See you soon!

2012-10-04

User stories from the projector group

Hello, time for an update over how the shared workspace is going to work!

I am going to describe the whole process from connecting with webrtc to using the workspace.

When connecting to the "room", or your friend, or whatever you want to call it, you need to set up the workspace that is going to be shared. A projector and camera are mounted in the ceiling looking down on your table. Before you can see your friends' workspaces you need to decide how much of your own table is going to be part of the shared workspace. All participants need the same workspace dimensions for the workspaces to be mergeable. The first user draws a rectangle and thus decides the dimensions of the workspace; the other users must then draw rectangles with the same dimensions. In the beginning the dimensions will be fixed, and you resize the workspace with a QR code or something else that is easy to detect. When you have drawn your workspace, the other participants' workspaces are merged and projected onto yours. You can now start drawing, and all the participants will see everything.

Once we are up and running you might want to disable the projection from one or more of the other participants. To do this you will see each participant's workspace in a small window beside your own workspace, much like layers in Photoshop. These windows will work as buttons, so you can disable/enable others' workspaces with a single touch. There will also be buttons for other things, like saving the current workspace to an image. This image can later be projected again and, for example, worked on further.

See our other blog posts for some images and videos of how it is going and how it is supposed to work.

All this work is divided into user stories that are going to be implemented as time goes. The user stories are in no particular order, and the list may be extended when ideas arise.
  • As a user I want to be able to resize workspace window
  • As a user I want to be able to do all preparations when connecting
  • As a user I want to be able to decide the proportions of the workspace
  • As a user I want buttons
  • As a user I want to enable/disable workspaces
  • As a user I want to be able to save the workspace
  • As a user I want to be able to project a saved workspace
  • As a developer I want to merge video streams
  • As a developer I want to be able to map coordinates
  • As a developer I want to transform the image
  • As a developer I want to be able to detect hand movements to see if a button is pushed
  • As a developer I want to be able to automatically detect the projected window
If you have any additional features you want implemented, please leave a comment and let us know. We really appreciate all kinds of feedback. And if you have a good idea of how to easily implement anything in the user stories, please take the time to comment.