News


Acoustic Holograms that Levitate Particles

October 27th, 2015


Researcher Asier Marzo, affiliated with BIG, is the lead author of the paper "Holographic Acoustic Elements for Manipulation of Levitated Particles", recently published in Nature Communications. The research is a collaboration between the University of Bristol (BIG and uNDT groups), the University of Sussex (Interact Lab), Ultrahaptics and the Public University of Navarre (TAIPECO group).

The paper presents a method for creating acoustic holograms with a phased array of ultrasonic transducers. These holograms are three-dimensional acoustic fields that can be emitted even from a flat surface. Unlike conventional light holograms, acoustic holograms cannot be seen, but they exert considerable forces on physical objects and can pass through water and human tissue. This enables the creation of tractor beams, tangible displays of levitated pixels and the manipulation of particles inside the human body.

Three holograms were found to be optimal for levitation. The first is an acoustic field that resembles a pair of fingers pinching the particle. The second is an acoustic tornado that drags objects into its eye. The third could be described as a high-intensity cage that surrounds objects from all directions.

An ultrasonic phased array is composed of many small loudspeakers called transducers. Each transducer emits a sinusoidal wave of the same frequency and amplitude but with a slightly different offset (phase delay). The waves are emitted from a two-dimensional surface, yet their interference pattern creates a three-dimensional shape above it.

A canon is a musical composition in which the same melody is played by several instruments starting at different times. The composition is carefully engineered so that the combination of the staggered melodies creates beautiful harmonies at every instant. Similarly, our computer algorithm calculates the phase delays for each transducer so that the listener, in our case the particle, is surrounded by the desired acoustic field.
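As an illustration of that calculation, here is a minimal sketch of the standard focusing computation for a flat phased array: each transducer is delayed by the extra distance its wave travels to a chosen focal point. The array size, transducer pitch and 40 kHz frequency are illustrative assumptions rather than the paper's parameters, and the paper's holograms use richer optimised phase patterns than this simple focus.

```python
import numpy as np

# Illustrative sketch only: focusing a flat ultrasonic phased array on a
# single point. Geometry and frequency are assumptions, not the paper's.

SPEED_OF_SOUND = 343.0           # m/s, in air at ~20 C
FREQUENCY = 40_000.0             # Hz, a common ultrasonic transducer frequency
WAVENUMBER = 2 * np.pi * FREQUENCY / SPEED_OF_SOUND

def transducer_positions(n=8, pitch=0.0105):
    """Positions of an n x n grid of transducers on the z = 0 plane."""
    coords = (np.arange(n) - (n - 1) / 2) * pitch
    xx, yy = np.meshgrid(coords, coords)
    return np.stack([xx.ravel(), yy.ravel(), np.zeros(n * n)], axis=1)

def focus_phases(positions, focal_point):
    """Phase delay per transducer so all waves arrive in phase at the focus.

    Each transducer is offset by the extra distance its wave travels,
    expressed as a phase in [0, 2*pi).
    """
    distances = np.linalg.norm(positions - focal_point, axis=1)
    return (-WAVENUMBER * distances) % (2 * np.pi)

positions = transducer_positions()
phases = focus_phases(positions, np.array([0.0, 0.0, 0.10]))  # focus 10 cm above
print(phases.reshape(8, 8).round(2))
```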

Video 1
Video 2
Video 3

The authors of the paper are Asier Marzo, Sue Ann Seah, Bruce W. Drinkwater, Deepak Ranjan Sahoo, Benjamin Long and Sriram Subramanian.


GHOST project at the European Commission ICT 2015 conference

October 26th, 2015

GHOST member and BIG researcher Themis Omirou was part of the team presenting the GHOST project at ICT 2015. The team was composed of four other members: Faisal Taher (Lancaster University), Josje Wijnen (Eindhoven University of Technology), Brandon Yeup Hur (Eindhoven University of Technology) and John Tiab (University of Copenhagen).

The project was listed as one of the top 10 must-see projects out of the 150 at the exhibition.

ICT 2015



1st Prize award for “Yo” in the SPHERE Dress-Sense competition

December 8th, 2014


BIG researcher Themis Omirou was recently part of the team that won first prize in the SPHERE Dress-Sense competition. The winning project was called "Yo – a system to support and enable users to self-manage symptoms of mental illness and modify behaviours, during and beyond a course of Cognitive Behavioural Therapy". Yo devices promote user awareness, helping break negative thought cycles and behaviour patterns and resulting in a positive change of mood. The concept storyboard illustrated how the Yo devices encourage self-reflection, human interaction and incremental changes to the user's daily activities.

Yo comprises the Yo-band and the Yo-bot. The Yo-band continuously collects data about a person's daily activity, recording when they are still and when they are moving. The wearer can also tap the band when they experience a negative thought. The number of button taps and the activity levels are relayed via Bluetooth to the Yo-bot when in range. The data is then combined and reflected in the well-being of the Yo-bot via a graphic image of a sunrise. In effect, the Yo-bot mirrors the well-being of the individual and, depending on their state, suggests activities for them.
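As a purely hypothetical illustration of how the two streams might be combined, here is a sketch of a well-being score; the field names, weights and caps are invented for the example and are not the project's actual design.

```python
# Hypothetical sketch of how the Yo-bot might combine the two data
# streams it receives over Bluetooth. All names, weights and caps are
# illustrative assumptions; the project's real data model is not
# described in this post.

from dataclasses import dataclass

@dataclass
class DailySample:
    active_minutes: int      # movement recorded by the Yo-band
    negative_taps: int       # times the wearer tapped the band

def wellbeing_score(sample: DailySample) -> float:
    """Map activity and negative-thought taps to a 0..1 score.

    More activity raises the score; more taps lower it. The sunrise
    graphic could then be drawn at a height proportional to the score.
    """
    activity = min(sample.active_minutes / 120.0, 1.0)   # cap at 2 hours
    negativity = min(sample.negative_taps / 20.0, 1.0)   # cap at 20 taps
    return max(0.0, min(1.0, 0.5 + 0.5 * activity - 0.5 * negativity))

print(wellbeing_score(DailySample(active_minutes=90, negative_taps=3)))
```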

The team was composed of seven members: Themis Omirou (BIG), Annie Lywood (UWE), Antonis Vafeas (UOB), Egho Ireo (Bath), Michal Kozlowski (UOB), Kimberly Higgins and Olivia Tiley (Red Maids' School). The £5,000 award was presented by the Mayor of Bristol, George Ferguson, at a ceremony held at Watershed.

 



3D Haptic Shapes at SIGGRAPH Asia

December 4th, 2014

 
Dr Benjamin Long today presented a paper at SIGGRAPH Asia on creating 3D haptic shapes that can be felt in mid-air. The paper will be published in ACM Transactions on Graphics.
 
The method uses ultrasound that is focussed onto the hands above the device, where it can be felt. By focussing complex patterns of ultrasound, the air disturbances can be perceived as floating 3D shapes. Visually, the ultrasound patterns have been demonstrated by directing the device at a thin layer of oil, so that the depressions in the surface show up as spots when lit by a lamp.
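To illustrate the idea of rendering a shape from focused ultrasound, here is a hedged sketch that cycles a single focal point along a circular outline above the array. The geometry, frequency and refresh scheme are assumptions for illustration, not the algorithm from the paper, which computes richer simultaneous patterns.

```python
import numpy as np

# Hedged sketch: rendering a haptic outline by rapidly cycling one
# ultrasound focal point around a circle above the array. Geometry,
# frequency and refresh scheme are illustrative assumptions.

SPEED_OF_SOUND = 343.0
FREQUENCY = 40_000.0
WAVENUMBER = 2 * np.pi * FREQUENCY / SPEED_OF_SOUND

def focus_phases(positions, focal_point):
    """Per-transducer phase so all waves arrive in phase at the focus."""
    distances = np.linalg.norm(positions - focal_point, axis=1)
    return (-WAVENUMBER * distances) % (2 * np.pi)

def circle_outline(radius=0.03, height=0.15, n_points=32):
    """Points on a horizontal circle floating above the array."""
    t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    return np.stack([radius * np.cos(t),
                     radius * np.sin(t),
                     np.full(n_points, height)], axis=1)

# An 8 x 8 grid of transducers on the z = 0 plane, 10.5 mm apart.
c = (np.arange(8) - 3.5) * 0.0105
xx, yy = np.meshgrid(c, c)
positions = np.stack([xx.ravel(), yy.ravel(), np.zeros(64)], axis=1)

# One phase pattern per outline point; emitting them in a fast loop
# (hundreds of revolutions per second) makes the outline feel continuous.
frames = [focus_phases(positions, p) for p in circle_outline()]
print(len(frames), "phase patterns per revolution")
```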
 
Dr Ben Long, Research Assistant from the Bristol Interaction and Graphics (BIG) group in the Department of Computer Science, said: “Touchable holograms, immersive virtual reality that you can feel and complex touchable controls in free space, are all possible ways of using this system.
 
“In the future, people could feel holograms of objects that would not otherwise be touchable, such as feeling the differences between materials in a CT scan or understanding the shapes of artefacts in a museum.”

YouTube video

Link to paper


VideoHandles receives honorable mention at SUI 2014

October 20th, 2014

YouTube: VideoHandles

Jarrod Knibbe, Sue Ann Seah and Mike Fraser presented their work on VideoHandles at SUI 2014 and received an honorable mention for best short paper.

VideoHandles is a novel interaction technique for searching through action-camera (e.g. GoPro) video collections. Action cameras are designed to be mounted, switched on and then ignored as they record the entirety of the wearer's chosen activity. This results in a large amount of footage that may contain only a small number of interesting moments, which are hard to locate when reviewing the footage later.

VideoHandles presents a novel solution to this problem, enabling the wearer to search through the footage by replaying actions they performed during the initial capture. For example, a diver goes for a long dive and communicates with their buddy throughout using a series of hand gestures. On one occasion, the diver sees a puffer fish and gestures 'puffer fish' to their buddy (a fish swimming motion, followed by a mimicked inflation) so that they can see it too. Later, using VideoHandles, the diver repeats the puffer fish gesture in front of the camera to locate that exact moment in their footage. Alongside the puffer fish footage, VideoHandles also returns all other moments that include the 'fish swimming' gesture – a key component of all fish gestures in scuba diving.
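As an illustration of the search-by-re-performing idea (not the paper's actual pipeline), here is a sketch that matches a re-performed gesture against sliding windows of per-frame motion features using dynamic time warping; the feature representation, window size and stride are assumptions invented for the example.

```python
import numpy as np

# Hypothetical sketch of gesture search over a video's motion track.
# A re-performed query gesture is compared against sliding windows of
# per-frame motion features using dynamic time warping (DTW). The
# features and parameters are assumptions, not VideoHandles' method.

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two sequences of feature vectors."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def find_gesture(footage: np.ndarray, query: np.ndarray, stride: int = 10):
    """Return the five (start_frame, distance) windows closest to the query."""
    w = len(query)
    hits = [(s, dtw_distance(footage[s:s + w], query))
            for s in range(0, len(footage) - w + 1, stride)]
    return sorted(hits, key=lambda h: h[1])[:5]

# Toy data: 1000 frames of 4-D motion features; the query is a noisy
# re-performance of the gesture at frames 400-460, which should rank first.
rng = np.random.default_rng(0)
footage = rng.normal(size=(1000, 4))
query = footage[400:460] + rng.normal(scale=0.05, size=(60, 4))
print(find_gesture(footage, query))
```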

The VideoHandles work explored action-camera use across a large collection of footage and presented a number of interaction styles and usage scenarios, both enthusiast and professional, including biking, windsurfing and archaeological excavation.