Earlier this month I made another visit to the Victoria & Albert Museum. My first visit, last year, was with a lady called Rafie Cecilia, to see how accessible it was for a visually impaired person like me, as part of her PhD study. This time I met her at the museum again, but for a focus group accessibility study organised by some of her colleagues from University College London (UCL), with 3 other participants taking part as well as me. The ladies from UCL (Lydia Porter, Jessica Andrich & Nicola Flüchter) were running a few of these sessions over a couple of days, and this one had sounded very intriguing to me. Quite literally ‘sounded’ in fact, given that it involved some clever use of ultrasound!
There were 4 tables spread around the room, each with an exhibit represented by a different type of accessible technology, along with audio description in every case. So the other 3 participants and I went to 1 table each, then swapped around until we had each spent time at all of them. Then we had a group discussion at the end to give our thoughts.
The first thing I looked at was a flat tactile model of some hieroglyphics, 3D printed I believe. I say looked at – to my eyes it just looked like a grey slab, until I put my face close to it and saw all the etchings in it. But feeling it was the idea anyway. And it was very detailed, as you would expect, so trying to pick out individual bits wasn’t always easy. As I still use my vision for a lot of things, I’m not quite as ‘tuned in’ to my fingers as many people without sight are. So the audio description definitely helped there, and was very interesting to listen to. I couldn’t always find exactly what it was referring to, but it helped me to picture things and get a good overview of what I was touching, so it served its purpose nicely.
The unique thing about this, however, was that I wore a paper ring around my finger, which had a special RFID tag attached to it. And there were RFID tags underneath the model as well. So as I moved my finger over the model, it would trigger the audio description for the relevant part of it, which was pretty cool. It wasn’t perfectly accurate, as occasionally I would trigger the wrong bit of audio, and it helps if you move your hand slowly as well. But in general it worked alright, and it definitely has a lot of potential. It just needs to be a bit more precise perhaps – a smaller tag on your fingertip, for instance, ought to be pretty spot on.
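To give a rough idea of the logic involved – and this is purely my own illustrative sketch, with made-up tag IDs and clip text, not how the researchers actually built it – the software side could be as simple as a lookup from tag reads to audio clips, with a little debouncing so the clip doesn’t restart while your finger lingers over one region:

```python
import time

# Hypothetical mapping from the RFID tags hidden under the model to the
# audio description for each region of the hieroglyphics
AUDIO_CLIPS = {
    "tag-falcon": "A falcon in profile, representing the god Horus...",
    "tag-eye": "The eye of Horus, a protective symbol...",
    "tag-reed": "A single reed leaf, read as the sound 'i'...",
}

def tag_reads():
    """Stand-in for polling a real RFID reader as a finger moves around.
    Repeated reads of the same tag mimic a finger lingering on one region."""
    for tag in ["tag-falcon", "tag-falcon", "tag-eye", "tag-reed"]:
        yield tag
        time.sleep(0.5)

last_tag = None
for tag in tag_reads():
    if tag == last_tag:
        continue  # debounce: don't restart the clip while hovering on one region
    last_tag = tag
    print(f"Playing: {AUDIO_CLIPS[tag]}")  # a real system would play the recording
```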
The second exhibit was a 3D printed model of a scarab beetle. Not that I was entirely sure of what it was at first – from the front it looked a bit like a frog or a toad! But when I looked at the side and saw it had extra legs, it was clearly something else. Again, the audio description cleared that up. But this time, it was triggered in a very simple way – by placing it on a box. That was it. You place the object on the box, and the audio starts. If you lift it up, the audio stops. Then you can put it back to resume the audio again. Very simple and effective. It basically means you could have a variety of tactile objects on a table, along with this special box, and then place each object on the box in turn to hear more about it.
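Sketched as code – again just my own guess at the behaviour, with invented object and clip names – the box only needs a simple pause/resume state machine keyed on whether anything is resting on it:

```python
# A guess at the box's play/pause/resume behaviour: it senses whether an
# object is resting on it and keeps track of how far the audio has got.

class AudioBox:
    def __init__(self, clips):
        self.clips = clips        # object ID -> audio clip name
        self.current = None       # which object is (or was last) on the box
        self.position = 0         # seconds into the current clip

    def object_placed(self, object_id):
        if object_id != self.current:
            self.current, self.position = object_id, 0   # new object: start afresh
        print(f"Playing '{self.clips[object_id]}' from {self.position}s")

    def object_lifted(self, seconds_played):
        self.position += seconds_played   # remember where we paused
        print("Paused")

box = AudioBox({"scarab": "scarab-beetle-description.mp3"})
box.object_placed("scarab")   # placing the beetle starts the audio
box.object_lifted(12)         # lifting it pauses
box.object_placed("scarab")   # putting it back resumes from 12 seconds in
```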
The third table didn’t involve a physical object. Instead, this was simply about using audio description on its own, with just a photo of the object, a sarcophagus, accompanying it. This was Rafie’s table, so she was already very familiar with my thoughts on audio description from our previous visits to this museum and the Museum of London. And the sarcophagus was interesting to hear about. I actually didn’t notice the photo initially, so was going purely on the audio description to build up an image in my head. And when I did then see the photo, I wasn’t too far off – I’d got the general idea.
The final table was the most intriguing to me though, purely because I’ve never experienced anything like it before. This was showing a technology called ultrahaptics, which uses ultrasound waves to represent the shape of objects in mid-air. It’s quite strange at first, and the technology is still being developed, but from what I saw… well, felt… here, it’s quite cool.
It basically consisted of a large square pad covered in a grid of small ultrasound emitters, with a camera at the top to detect where your hand is. So when you hold your hand flat over the pad and then move it around, you can see an animation of it on the computer screen. When the ultrasound is then turned on, it just feels like air being lightly blown against your palm. But it’s not air, it’s ultrasound waves. You can’t hear them, you just feel them. And there’s nothing uncomfortable about it, it’s quite pleasant really.
So by directing those ultrasound waves from different parts of the grid, you can feel a variety of different shapes on your palm in mid-air, and you can move your hand around to feel them from different angles. So this can be a small dot, a line, a circle, or whatever. And you can have movement as well – so in one example a dot followed my hand around the pad as I moved randomly over it, and in another I could feel the waves rotating like helicopter rotors, which was clever.
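As I understand it – and this is my own simplified take, not something the researchers went into in this much detail – pads like this work as a phased array: every emitter fires the same 40 kHz signal, but with its phase shifted so that all the waves arrive in step at one chosen focal point in mid-air, where they reinforce each other strongly enough to be felt. A rough sketch of that phase calculation, with made-up grid dimensions:

```python
import math

SPEED_OF_SOUND = 343.0              # metres per second in air
FREQUENCY = 40_000.0                # 40 kHz is typical for these devices
WAVELENGTH = SPEED_OF_SOUND / FREQUENCY

def phase_offsets(focal_point, grid_size=4, spacing=0.01):
    """Phase offset (radians) for each emitter in a grid_size x grid_size
    array (spacing in metres, pad at z=0) so that every wave arrives at
    focal_point (x, y, z in metres) in phase, reinforcing the others."""
    fx, fy, fz = focal_point
    offsets = []
    for i in range(grid_size):
        for j in range(grid_size):
            tx, ty = i * spacing, j * spacing
            distance = math.sqrt((fx - tx) ** 2 + (fy - ty) ** 2 + fz ** 2)
            # Emit at phase -k*d, so after travelling distance d every wave
            # reaches the focal point at the same phase
            offsets.append((-2 * math.pi * distance / WAVELENGTH) % (2 * math.pi))
    return offsets

# Focus 20 cm above the centre of a 4x4 pad; sweeping the focal point along
# a path quickly enough makes the palm feel the path as a shape
print(phase_offsets((0.015, 0.015, 0.20))[:4])
```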
You don’t need to hold your hand close to feel it, you can rest your elbow on the table and have your hand fairly high up. It does take a bit of trial and error to find and stick to the optimal height range – too high and you can’t feel anything, too low and you’ll feel the waves but not the shape. So it takes a bit of getting used to, but I was fine with it.
Once I’d had the demonstration of how it works, it was time for the actual test. And the object I would be feeling in this way was a bracelet, again with audio description. So I could feel the round shape created by the ultrasound waves in the air, including the hole in the centre like a bracelet obviously has. And I could move my hand to feel it from above and around the sides, which was nice, giving it that 3D feel.
It’s not high definition, so I couldn’t feel any detail other than that – it was just the general shape and size of it, in an air-like form. It’s hard to wrap your head around the fact that it’s ultrasound rather than air really! But as the technology is refined in the future, I imagine you’ll be able to pick out a bit more detail. So, keeping in mind that it is still a work in progress, I thought it was pretty cool.
Of course, it’s never going to be better than feeling the actual physical object or a tactile model of it – I’d have thought most people, myself included, would prefer that. And there are limits to how much detail you can feel through ultrasound waves, I assume.
However, it could still be a pretty useful and fun way to interact with exhibits sometimes. The technology isn’t cheap, but then you wouldn’t need one pad for every object. Rather, you could have an interactive screen where you select from multiple objects to feel in this way. It also makes me wonder whether there are uses for it in the home. Maybe there are online shopping scenarios, or even an online version of a museum, where you could feel the shape of an object to get a sense of its size and structure. Interesting to think about anyway. It’ll be fun to see how it progresses in the future and what it gets used for – there’s definitely potential there.
So it was a great afternoon, and we all had a good discussion about everything we’d seen. Naturally, as we noted in our conversations, different people have different needs and use museums differently, so there’s never going to be a one-size-fits-all solution for everyone. So it’s important to give people the choice – some may want to feel things, some may want audio description, some may be happy with just large print, some may love the ultrahaptics technology, and so on. I personally like to combine what sight I have with any audio and/or tactile experiences that are available, as in combination they help me to understand and appreciate the objects I’m looking at even more. So I make the most of all the relevant senses I have as and when necessary – I don’t limit myself to just one.
Ultimately, it would be wonderful to be able to go into any museum and have the option of not just what to look at, but also how to experience it and get enjoyment from it. Some museums are making a good effort with this, offering things like audio description and large print guides and object handling opportunities, but some only offer them to a limited degree and others not at all. Hence it’s great that these studies are going on, so museums can continue to become more accessible, in order for more people to enjoy everything they have to offer. Things are definitely a lot better than they used to be when I went to museums as a child, and slowly improving as time goes on, but there’s still plenty of work to do as well.
It was also great to catch up with Rafie, and I’ll be seeing her again very soon to explore the third and final museum in her own accessibility study. It’s a museum that I’ve never been to before, but I’ve been told about a very interesting app that I should be able to use in one of its exhibitions. So I’m really looking forward to trying that out, and I’ll let you know how I get on of course!