Sunday, April 13, 2014

Sentio - The Paper

The paper below is the SpaceHack 2014 submission by the Sentio team.  We formed at the Adler Planetarium in Chicago, Illinois as part of the NASA Science Hack Day 2014 challenge.

Sentio – The Perceptive and Effective Spacesuit


Kent Nebergall – knebergall@gmail.com
Alex Maties – goemailalex@gmail.com
Janet Ginsburg – jaginsburg@gmail.com
Julieta Aguilera – jaguilera@adlerplanetarium.org
(Open source in accordance with public domain laws.  Credit for creation and development to the team listed above.  Reuse with credit to creators and fair use allowed.)

Abstract

Sentio is an EVA system that maximizes situational awareness and manipulation of the space environment across a human/robotic group.  Machine intelligence flags unusual objects from any sensor for investigation.  Robots follow the crew and ping the subsurface with an improvised phased array.  Gloves are replaced with a hand-tracking enclosure that controls both a robotic hand and power tools.  Spacesuits and robots carry lights that signal health and status at a glance, and stereo headsets add directional situational awareness.

Human capacity and sensation


Exploring space by spacesuit is a bit like taking a young child to the zoo for the first time, then wrapping them in a scuba mask, earplugs, oven mitts, and nose plugs.  They will learn about as much as they would from watching a movie and will be exhausted doing so.  They will come away largely ignorant of the environment because they were largely unable to sense it.
Our senses and abilities are well adapted to Earth.  We function very well with the limited spectrum coming through the atmosphere, the sounds common to Earth, and its pressures, threats, and smells of food or water.  We are geared to evade large predators and spoiled food, not solar flares.
Worse yet, when we do visit space, we encapsulate ourselves away from the environment and cut our senses dramatically.  We operate in a full-spectrum environment, yet cut our field of vision to the limits of the space helmet.  We cannot hear, smell, or taste anything in a vacuum.  Pressurized gloves both dull our sense of touch and are difficult to use when gripping tools or handholds.

We go to the most interesting places dressed like the cocooned child, and we call it exploration.  We are oblivious to solar wind, flares, meteor showers, seismic waves through the surface, and temperature.

OODA loops and maximized science

Fighter pilot and instructor John Boyd invented the OODA loop: Observe, Orient, Decide, Act.  The general principle is that the one who can cycle through this loop faster and more accurately than the opponent wins the fight.  In a spacesuit, we've noted that Observe and Act are highly limited, and Orient in microgravity, or on a surface without GPS, is also a challenge.  This dramatically limits our perception and action in EVA work.  We need to design a system that maximizes observation, maps the observations correctly via orientation in space, uses machine intelligence to flag anything out of the ordinary or significant, and leverages tools with maximum comfort so the crew member can do the most work with the least effort.
In summary, our new loop is Augment, Amplify, and Expand.  We maximize our ability to both perceive and act within the environment, and add new capacities to the human body via sensors and advanced tools.

Sentio – perception system

Sentio is the Latin word for "I perceive" or "I feel."  Our goal is to extend the senses of an Extra-Vehicular Activity (EVA) sortie.  Much like a modern battlefield network, where any perception by any unit is relayed to all units, the EVA system links robots and humans in a single perceptive net.
Space is not a native environment for humans.  By amplifying our senses, we may find correlations, opportunities, and threats.  By meshing them across a broader group of people and robots, we can extend our individual senses across a wide area to replicate the situational awareness we get on Earth from sound, smell, and peripheral vision.

Goslings - Robotic Sense Extension and Sharing

Modern EVA work uses a buddy system in which at least two astronauts are outside.  In a more advanced setting, we anticipate each astronaut being surrounded by six smaller robots.  Since these robots follow each astronaut and "bond" with them while "chattering" about what they notice, we have named them "Goslings": they follow their astronaut like baby geese following a parent.
In this top-view illustration, the goslings are the small blue boxes.  The astronauts are the yellow icons.  The white octagon is an object that is not interesting.  The robots have flagged the orange octagon as something for investigation.  Since two people are within the same "bubble," the goslings have formed two concentric rings, like electron shells around a growing nucleus.  With the crew suits and the goslings, there are now fourteen sets of scanning electronics surrounding the object.  The inset is from my work at Mars Desert Research Station on Crew 124, where we were able to use a remote-control robot to simulate some of these issues.


The machine intelligence of these robots is limited but effective.  They have an animal-like flocking or schooling behavior around the astronaut, tending to stay equally distant from the crew member and from each other.  If two humans come together, the robots form two concentric rings and link as a single mesh.  They are designed to capture multispectral images of their environment.  In addition to navigation and mapping, the data is processed in real time to flag any rock or object that is unusual in color or spectral signature, and to pre-process that information for the crew member.  On Mars, each robot would also carry rudimentary amino acid sensors and other passive life indicators to flag any suspect matter beneath it.
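As a purely illustrative sketch of that station-keeping behavior (none of the names or parameter values below come from the actual Sentio design), each gosling could simply be assigned an evenly spaced slot on a ring around its astronaut and nudged toward that slot on every control tick:

    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Vec2 { double x, y; };

    // Assign each gosling an evenly spaced slot on a ring of the given
    // radius around the astronaut, then step it a fraction of the way
    // toward that slot on every control tick.
    void stepGoslings(std::vector<Vec2>& goslings, Vec2 astronaut,
                      double radius, double gain) {
        const double tau = 6.283185307179586;  // full circle in radians
        for (std::size_t i = 0; i < goslings.size(); ++i) {
            double angle = tau * i / goslings.size();  // equal angular spacing
            Vec2 slot = { astronaut.x + radius * std::cos(angle),
                          astronaut.y + radius * std::sin(angle) };
            goslings[i].x += gain * (slot.x - goslings[i].x);
            goslings[i].y += gain * (slot.y - goslings[i].y);
        }
    }

    int main() {
        std::vector<Vec2> flock(6, Vec2{0.0, 0.0});
        for (int t = 0; t < 50; ++t)
            stepGoslings(flock, Vec2{10.0, 5.0}, 4.0, 0.2);  // converge on ring
        for (const Vec2& g : flock) std::printf("%.2f %.2f\n", g.x, g.y);
        return 0;
    }

A real implementation would add collision avoidance and the two-ring merge, but the equal-spacing rule alone produces the electron-shell pattern described above.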
The astronaut and robot cameras and communication systems would operate very similarly.  If an astronaut said they had found an unusual rock, the headphones would not only carry the voice message but convey the direction and distance of the speaker, and lights on the speaker's helmet would flicker in sync with the speech pattern.  This combination would become intuitive to crew members and maximize verbal and non-verbal communication.  You would be able to tell which person was talking to you even when their back was to you, or which robot voice you were hearing.  The active speaker would also flicker on a HUD map.
Goslings spread out ahead of or around a crew member would allow area-level radar mapping of the subsurface.  The matrix of robots could synchronize radar pings into the subsurface at different frequencies, intervals, and locations, producing higher-fidelity maps at shallow depths and lower-resolution echoes from greater depths.  Instruments that read cosmic-ray backscatter achieve much higher resolution at short distances or with longer exposures, so a moving cordon of sensors would give an excellent profile of the surface and subsurface.  Correlations between surface materials and subsurface densities could further characterize exposed strata as the robot group moves across the terrain.  With this dispersed, high-power array, it would even be possible to ping upward to track incoming small meteors.  Although the warning time for large rocks would be too short to be useful, the system might indicate that a small, uncharted meteor shower from a dead comet was in progress, and warn the crew before they would otherwise be aware.  A large phased array of transmitters could also send burst communications deep into space for direct communication with satellites or Earth, as appropriate.  Goslings would also monitor seismic activity, both to listen for threats and to follow the footsteps of crew on the Moon or Mars.
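To make the subsurface mapping concrete, the basic arithmetic is just two-way travel time: a synchronized ping's echo delay, multiplied by the wave speed in the ground and halved, gives the reflector depth.  A minimal sketch, with an assumed wave speed (a real survey would calibrate it per site):

    #include <cstdio>

    // Convert a synchronized radar ping's two-way echo delay into
    // reflector depth: the wave travels down and back, hence the halving.
    double echoDepthMeters(double twoWayDelaySeconds, double waveSpeedMps) {
        return waveSpeedMps * twoWayDelaySeconds / 2.0;
    }

    int main() {
        double v = 1.5e8;       // ~half light speed, plausible for dry ground
        double delay = 2.0e-7;  // a 200 ns echo
        std::printf("Reflector at ~%.1f m\n", echoDepthMeters(delay, v));  // ~15 m
        return 0;
    }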
Also, by keeping rings around crew members, we can detect hazards such as loose soil or cliff edges before anyone stumbles into them.  A similar system could be used by soldiers, emergency response teams, or smokejumpers to observe conditions over a wide area.
Tools, emergency supplies, and sensors spread across crew members and robots could also form a combined inventory, better equipping each EVA to be safer and more productive.  The system could allow any astronaut to ask which person or robot is carrying a particular tool and summon it.  If a tool were left behind, a robot could be sent to retrieve it.
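A hypothetical sketch of that shared inventory (the class and names are illustrative only): every crew member and gosling registers what it carries, and any unit can query who holds a given tool before summoning it.

    #include <iostream>
    #include <map>
    #include <string>

    // Minimal shared EVA inventory: maps each tool to the unit carrying it.
    class EvaInventory {
        std::map<std::string, std::string> carrierOf;  // tool -> unit id
    public:
        void registerTool(const std::string& tool, const std::string& unit) {
            carrierOf[tool] = unit;
        }
        std::string locate(const std::string& tool) const {
            auto it = carrierOf.find(tool);
            return it == carrierOf.end() ? "unknown" : it->second;
        }
    };

    int main() {
        EvaInventory inv;
        inv.registerTool("rock hammer", "gosling-3");
        inv.registerTool("core drill", "astronaut-1");
        std::cout << "rock hammer is with " << inv.locate("rock hammer") << "\n";
        return 0;
    }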
For deep space and asteroid missions, the goslings could be free-flying robots that perform the same functions and share some of the same electronics and sensors. 

“Mood Lights”

A team leader at a construction site can gauge the stress, productivity, and mood of a work crew just by walking near the work site and noting nonverbal communication: judging from noises and movements who is too fatigued, who is depressed, who is angry, who is enthusiastic, and so on.
Spacesuits, by contrast, need color stripes around the legs just so that mission control can tell which astronaut is which on a spacewalk.  Mission control can monitor health from thousands of miles away, but the crew member next to the one in distress may be oblivious to the situation.  This drastically reduces the social cues and non-verbal communication vital to teams working on tasks.
The suggestion here is a lighting or other display system that flags the health, stress level, and relative exhaustion of each crew member during a physical EVA.  A similar system would flag weak batteries or a stuck wheel on a gosling rover, but in a slightly different color, to avoid confusing a distressed robot with a distressed crew member in an emergency.  Similarly, the mood lights could flag a robotic discovery of something unusual, or the fact that one person or robot is actively communicating.
If a person or robot speaks, or a piece of equipment sends an alert, the stereo headphones in the helmet would reflect where the signal was coming from and roughly how close it was.  When the crew member turned to see who was speaking, the lights would flicker in sync with the voice to intuitively identify the speaker.  Teleconferencing software interfaces already do this for the same reason.
If someone had a health issue, the same system could flag a red light or read out vital signs visually: a flickering LED for pulse, a slow fade in another color for breathing rate, along with blood oxygen level and so on.  Even a remote viewer on camera would be able to read the situation with no other data.
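One plausible encoding (our assumption, not a tested standard): drive one LED as a short blip per heartbeat and another as a slow fade per breath, so both rates can be read at a glance or on camera.

    #include <cmath>
    #include <cstdio>

    // Brightness (0..1) of the pulse LED at time t: a short blip per beat.
    double pulseLed(double tSeconds, double beatsPerMinute) {
        double period = 60.0 / beatsPerMinute;
        double phase = std::fmod(tSeconds, period) / period;
        return phase < 0.15 ? 1.0 : 0.0;
    }

    // Brightness of the breathing LED: a slow sinusoidal fade per breath.
    double breathLed(double tSeconds, double breathsPerMinute) {
        double period = 60.0 / breathsPerMinute;
        return 0.5 + 0.5 * std::sin(6.2832 * tSeconds / period);
    }

    int main() {
        for (double t = 0.0; t < 2.0; t += 0.25)
            std::printf("t=%.2fs pulse=%.1f breath=%.2f\n",
                        t, pulseLed(t, 72), breathLed(t, 16));
        return 0;
    }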

A force feedback "empathy" suit

The separation, enhancement, and transmission of environmental data from suit to suit and between suits and robots can go beyond telepresence applications.  One practical application has to do with medical response when someone has an accident on an EVA.  First responders speak of the golden hour and the platinum half hour: the time between injury and treatment, and the value of beginning treatment as quickly as possible.
With a traditional suit at MDRS, if someone falls off an ATV, the rider in front has to notice it in a small arm-mounted mirror that they must slow down and check periodically.  If this happened and the person was conscious, the two would have a short dialog about the injuries, their extent and locations, and so on.  In a real spacewalk, mission control would have access to the vital signs.  But let's add another layer.
Say we build the spacesuits with a series of force-feedback instruments throughout the body.  If someone were injured, the HUD would identify who, and the 3D sound would give the direction and distance to the injured person instantly.  We could also convey the location of the injuries by pulsing the corresponding force-feedback patches on each receiver's suit, showing that the injury was to a leg, the back, or wherever.  The entire crew would know the nature, location, and severity of a crew member's injuries in near-real time, almost as quickly as the human mind and body can map such information.  This would shave precious seconds off the response time, while also reducing the chance of exacerbating the injuries through ignorance that the person hurt their back as well as their leg, for example.
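A hedged sketch of that relay, with zone names and intensity scaling invented purely for illustration: an injury report names a body zone and a severity, and each receiving suit pulses its matching haptic patch.

    #include <iostream>
    #include <map>
    #include <string>

    struct InjuryReport {
        std::string crewId;
        std::string bodyZone;  // e.g. "left-leg", "lower-back"
        int severity;          // 1 (minor) .. 5 (critical)
    };

    // Pulse the receiving suit's patch that corresponds to the injured zone;
    // stronger injuries pulse harder.
    void relayToSuit(const InjuryReport& r,
                     const std::map<std::string, int>& patchForZone) {
        auto it = patchForZone.find(r.bodyZone);
        if (it == patchForZone.end()) return;
        std::cout << "Pulse patch " << it->second
                  << " at " << r.severity * 20 << "% intensity"
                  << " for " << r.crewId << "\n";
    }

    int main() {
        std::map<std::string, int> patches = {{"left-leg", 7}, {"lower-back", 3}};
        relayToSuit({"astronaut-2", "lower-back", 4}, patches);
        return 0;
    }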

Hands – introducing Marionette

As noted, hand fatigue from pressurized gloves is a big issue for astronauts doing work.  Gloves have to use heaters to deal with the thermal stress of day and night cycles in orbit.  Thick gloves also make it harder to feel tools and handholds, and easier to drop them accidentally.  It is also very hard to pry rocks out of the soil in heavy gloves without a sharp instrument.


The Marionette is a box that contains the hand plus various input and tactile feedback systems.  The hand may be fully mapped with a data glove, with the box volume kept to a minimum by having the robotic system scale up the movements.  The scaling would be adjustable for fine work or gross movement.  The box would contain tactile, force feedback, and any other technology judged appropriate.  Since this technology is moving so quickly, it is unwise to predict exactly what would be packaged in the hand "theater."  A Leap Motion, a data glove, and a simple multi-button control were suggested, along with texture feedback (haptic displays that distinguish smooth and rough surfaces), force-feedback gloves, and thermocouples for temperature.  A haptic sensor could warn the user if a drill bit were about to break or overheat, or of other hardware risks.  Soft feedback based on sensor input, or on pre-made maps of the interiors of surfaces, could indicate things to avoid or to aim for while drilling.
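The movement scaling itself is simple.  A sketch with illustrative gain values shows how one tracked hand displacement inside the box could drive either gross or fine robot motion:

    #include <cstdio>

    struct Pose { double x, y, z; };

    // Map a tracked hand displacement to a robot-hand displacement by gain:
    // gain > 1 amplifies for gross movement, gain < 1 attenuates for fine work.
    Pose scaleMotion(Pose handDelta, double gain) {
        return { handDelta.x * gain, handDelta.y * gain, handDelta.z * gain };
    }

    int main() {
        Pose delta = {0.01, 0.02, 0.0};        // 1-2 cm of motion in the box
        Pose gross = scaleMotion(delta, 5.0);  // gross mode: 5x amplification
        Pose fine  = scaleMotion(delta, 0.25); // fine mode: 4x attenuation
        std::printf("gross: %.3f %.3f\n", gross.x, gross.y);
        std::printf("fine:  %.4f %.4f\n", fine.x, fine.y);
        return 0;
    }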
There would be a human-like robotic hand at the end of the theater box for any functions the hand would normally do, though even this could be swapped for a larger or smaller manipulator.  Around the sides of the box there would be mounts for power tools, displays, sensors, and rough hand tools such as rock hammers.  The box would be combined with a forearm "bracer" to maximize the leverage and stability of the mounted tools.  A person could have up to five manipulators mounted at once, but probably would not, because of weight and inertia.  These can be tools, sensors, cameras, or a display along the wrist showing tool and work status at the same focal depth and field of vision as the work being done.
To maximize leverage, and to match our intuition of fingertips inside and fingernails outside, we would favor mounting hard tools on the outside/back of the forearm and soft tools and sensors on the inside.  Tools may be carried by goslings or kept in a tool belt at the hip.
A hard tool might be a hammer drill for core sampling, a wrench for maintenance work, or a rock hammer.  The drill bit or other wear items could give feedback to the user directly, via physical sensations through the drill mount at the wrist, or electronically, via the haptic feedback inside the glove.  Since we respond faster to impulses closer to the brain, a failsafe might flag the forehead with a prod or slight shock, rather than the hand, to warn of a pending broken drill bit.
An example of a soft tool would be the so-called "coffee-ground hand" that was developed for a robot: a soft rubber ball filled with coffee grounds, attached to a hose that can pump air out or in as needed.  When the air is withdrawn, the grounds jam into a rigid matrix and the ball locks into its current shape.  The ball can thus morph around a pen, lock with the vacuum, and pick the pen up well enough to draw with it, then release it by pumping air back in.  Mounted at the wrist, this would be a good system for soft pickup, because it requires no mechanics to reach and grip, and is forgiving of collisions when not in use.  It would also allow someone to manipulate objects close to the body rather than at arm's length.  Since the soft gripper has two settings (grip and release), the operator could use gesture controls such as snapping their fingers to pick up or drop objects.
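Since the gripper is binary, the control logic reduces to a two-state toggle.  A toy sketch, assuming the hand-tracking system supplies the snap-gesture events:

    #include <cstdio>

    enum class GripperState { Released, Gripping };

    // Each recognized "snap" gesture toggles the jamming gripper between
    // grip (vacuum on, grounds jammed) and release (air pumped back in).
    GripperState onSnapGesture(GripperState s) {
        if (s == GripperState::Released) {
            std::puts("vacuum ON: grounds jam, gripper holds its shape");
            return GripperState::Gripping;
        }
        std::puts("air IN: grounds loosen, object released");
        return GripperState::Released;
    }

    int main() {
        GripperState s = GripperState::Released;
        s = onSnapGesture(s);  // pick up the pen
        s = onSnapGesture(s);  // drop the pen
        return 0;
    }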
Additional "hands" and tools could be carried in the tool belt or by a gosling.  A human-like robot hand with sensors could be mounted on the end of the box to extend reach.  If the controller mimicked the movement and feel of the hand, people working with such an extension would eventually map their movements naturally to the virtual hand, and would feel odd taking the suit off and suddenly having much shorter arms.
We may want to use this system on the dominant hand only, so that an equipment failure could not leave the user unable to use either hand.  We might also include a glove inside the theater that would inflate to shield against vacuum, so that the theater could be removed in the field in an emergency.
A typical Marionette setup would have a power drill on the back of the wrist, a soft-grab tool on the inside of the wrist, a virtual hand with lights and cameras at the end of the box, and a display facing back from the top of the wrist.  The underside might carry a hand-lens camera with several spectra and an ultrasound touch probe.  The display would ensure that information is conveyed at the same focal depth and field of view as the work being done, rather than on a heads-up display.  Since this is modular, the list is far from complete.  The controls could also be linked to gosling manipulator arms, used to drive vehicles or construction equipment, or used for microsurgery and similar fine work done remotely.
This force augmentation need not be limited to Marionettes for the hands.  Stilts with gripping feet like a monkey's, or climbing hooves like a goat's, could eventually be added for microgravity or mountainous environments.  Foot controls could also work simple jet packs at the ISS while the hands were freed for tool use.
Much as earthmoving equipment eventually feels natural to its operator, the "orangutan arms" would eventually register as "live" limbs to the operator.

The Prototype

Team member Alex worked all night on a prototype Arduino glove with the following behaviors:

  • Based on a temperature sensor, it would convey temperature to a thermocouple inside the enclosure.
  • Based on a pressure sensor, it would "bump" the arm with a servo.
  • Based on pushing a button, it would spin the "drill bit" on a servo/tool attachment.
Unfortunately, at 4 AM, a misplaced soldering iron ruined the prototype.  A model was hastily assembled for the presentation from the sensors and glove, but we couldn't demonstrate it.
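For the record, a minimal Arduino sketch matching the three behaviors above might have looked like the following.  The pin numbers, thresholds, and the PWM heater pad standing in for the "thermocouple" output are our reconstruction, not the lost code:

    #include <Servo.h>

    const int TEMP_PIN   = A0;  // analog temperature sensor
    const int PRESS_PIN  = A1;  // analog pressure sensor
    const int BUTTON_PIN = 2;   // "drill" trigger button
    const int HEATER_PIN = 3;   // PWM heat pad conveying temperature inside

    Servo bumpServo;            // taps the forearm on pressure events
    Servo drillServo;           // continuous-rotation "drill bit" servo

    void setup() {
        pinMode(BUTTON_PIN, INPUT_PULLUP);
        pinMode(HEATER_PIN, OUTPUT);
        bumpServo.attach(5);
        drillServo.attach(6);
    }

    void loop() {
        // 1. Relay sensed temperature as a proportional PWM level inside.
        int temp = analogRead(TEMP_PIN);    // 0..1023
        analogWrite(HEATER_PIN, temp / 4);  // scaled to 0..255

        // 2. "Bump" the arm with a servo when outside pressure is sensed.
        if (analogRead(PRESS_PIN) > 600) {  // illustrative threshold
            bumpServo.write(60);            // quick tap...
            delay(100);
            bumpServo.write(0);             // ...and retract
        }

        // 3. Spin the drill attachment while the button is held down.
        drillServo.write(digitalRead(BUTTON_PIN) == LOW ? 180 : 90);  // 90 stops
        delay(20);
    }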

Augmented sight

Much has already been said and done on the subject of heads-up displays (HUDs) and newer augmented-reality and computer interface systems such as Google Glass and VR headsets.  Given the setup described so far, we introduce some new interface and observation options beyond what has been covered in other work.

The helmet may have cameras looking in all directions in UV, visible, and infrared light.  Spectral imagers and filters could be cycled.  The inputs may be mapped to satellite data and past observations, both to show depth and to add the dimension of time to the visual inputs in contrast with the present.  Subsurface radar studies can be overlaid on the three-dimensional model, along with any live seismic data.  Magnetic field localizations and radiation data would also be input, both from the suit and from the field team.
This is a massive data set at the disposal of the astronaut.  To filter it, machine intelligence would look for anything unusual, unexpected, potentially valuable, or potentially threatening.  Like an "identify friend or foe" (IFF) system on a fighter jet, it would prioritize scientific, mining, and hazard conditions and flag them on the display for investigation or avoidance.  Voice controls could send the goslings ahead to different targets.  The dimension of time would allow crew to perceive shifts in the geology or weather.  A sky overlay could give a subtle indication of local magnetic fields and space weather as detected up to satellite level.
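The flagging itself need not be elaborate.  A minimal sketch (the threshold and the albedo example are illustrative) scores each new reading against the running statistics of recent ones and flags outliers without trying to classify them, consistent with the over-selection caution discussed later:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Flag a value whose distance from the mean of recent readings exceeds
    // the given number of standard deviations.
    bool isAnomalous(const std::vector<double>& recent, double value,
                     double sigmaThreshold) {
        double mean = 0.0, var = 0.0;
        for (double r : recent) mean += r;
        mean /= recent.size();
        for (double r : recent) var += (r - mean) * (r - mean);
        double sd = std::sqrt(var / recent.size());
        return sd > 0.0 && std::fabs(value - mean) / sd > sigmaThreshold;
    }

    int main() {
        std::vector<double> albedo = {0.21, 0.19, 0.22, 0.20, 0.21, 0.18};
        std::printf("0.20 anomalous? %d\n", isAnomalous(albedo, 0.20, 3.0));  // 0
        std::printf("0.55 anomalous? %d\n", isAnomalous(albedo, 0.55, 3.0));  // 1
        return 0;
    }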
Since the cameras cover a full circle but head movement is restricted by the helmet, rotating one's head to the limit of motion could command the camera view to keep rotating, or even to look "through" the body to show a dropped tool or the position of a hand.
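A sketch of that control law, with assumed helmet limits and slew rate: inside the range the view tracks the head one-to-one; held against the stop, it keeps rotating.

    #include <cstdio>

    double viewYaw = 0.0;  // virtual camera yaw, degrees

    // Track head yaw 1:1 inside the helmet's range; at the stops, keep
    // slewing the virtual view at a fixed rate.
    void updateView(double headYaw, double dtSeconds) {
        const double limit = 30.0;     // assumed +/-30 deg of head yaw
        const double slewRate = 45.0;  // assumed deg/s of extra rotation
        if (headYaw >= limit)       viewYaw += slewRate * dtSeconds;
        else if (headYaw <= -limit) viewYaw -= slewRate * dtSeconds;
        else                        viewYaw = headYaw;
    }

    int main() {
        for (int i = 0; i < 100; ++i) updateView(30.0, 0.05);  // hold 5 s
        std::printf("view yaw after hold: %.0f deg\n", viewYaw);  // ~225 deg
        return 0;
    }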
A crew member could look across the landscape at the team, ask for the current readings on surface conductivity, and see the values pinged for one second to both the "mood lights" and the electronic display map.  They could also flip to any crew member's or gosling's field of view, and rotate through archived images, spectra, and scales from micrographs to satellite imagery as available.  Radar surveys of the subsurface could also be dialed in and overlaid within the field of view.  Maps and other archived data would be time-stamped and framed to indicate they were not live.

Augmented hearing

As noted, we can stereoscopically map the location of signals from the team via a set of 3D headphones to give greater intuitive situational awareness.  While fan noise and other irritants are a problem with spacesuits and spacecraft in general, we would not want active noise cancellation to be too good, because it could block awareness of problems with the suit itself.  This is also why we do not broach the subject of artificial smells: they could interfere with noticing smoke or the smell of heated plastic inside the spacesuit.
As people age, their frequency range of hearing diminishes.  The headset could be mapped to the limitations of the user, so that any input is maximized for the recipient.
In addition to voice communication from humans or automata, computer systems may make distinct noises to indicate status, discovery, or threats.  A situational matrix of seismographs could be extended across the entire group to give subtle knowledge of even small events, with magnitude, bearing, and distance, down to crew footsteps.  On Mars, wind noises could be magnified, and local wind events like dust devils in the distance could be heard.  Satellite data could announce an approaching dust storm with a distinct set of noises.  On the Moon, local radiation, magnetism, and other invisible phenomena could be sonified as wind noise or simulated radio static, with the frequency matching the spectrum and the volume matching the flux of the current radiation environment.  A small solar flare, not enough to require shelter but enough to avoid where possible, could sound like an approaching rainstorm or something else our intuition would recognize.  Incidentally, these cues might ease the "homesickness" of crews as well as give them a more intuitive experience and understanding of the subtleties of the world they are exploring.  For Earth use, similar systems could warn of approaching wildfires or bad weather.
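One way to build such a soundscape (the pan law and scaling here are assumed design choices, not a Sentio specification): map each source's bearing to a stereo pan and its measured flux to loudness.

    #include <cmath>
    #include <cstdio>

    struct StereoGain { double left, right; };

    // Constant-power pan: bearing in radians (0 = dead ahead, +pi/2 = right),
    // loudness proportional to measured flux up to a full-scale value.
    StereoGain spatialize(double bearing, double flux, double fluxFullScale) {
        const double halfPi = 1.5707963;
        double volume = std::fmin(flux / fluxFullScale, 1.0);
        double pan = (std::sin(bearing) + 1.0) / 2.0;  // 0 = hard left, 1 = right
        return { volume * std::cos(pan * halfPi),
                 volume * std::sin(pan * halfPi) };
    }

    int main() {
        // A modest radiation event off to the front left, 40% of full scale.
        StereoGain g = spatialize(-0.8, 40.0, 100.0);
        std::printf("left=%.2f right=%.2f\n", g.left, g.right);
        return 0;
    }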
For the Moon, sound systems could be tied to tools, to the explorers' feet, and to the goslings' seismographs, analogous to elephants detecting infrasound through their feet.  On Mars, this would be augmented by magnified natural sound mapped to the field crew and virtualized to each listener's location electronically.  Ultrasound and infrasound could likewise be shifted into each user's personal auditory range.

Augmenting other senses

The only senses remaining are taste and smell.  A geologist who uses the "tick, lick, loupe" method of examining rocks may find a taste simulator useful.  Tick means using a rock hammer to expose an unweathered surface.  Lick means literally licking the exposed rock to determine granularity, salt content, and so on.  Loupe means examining the now-wet rock with a hand lens, since crystal formations and other features are easier to see on a wet surface.
In space, this physical routine is impossible.  Chemical and related sensors on the suit and robots would perform the analysis instead.  It may be possible to build a taste simulator that could be touched when needed, like a drinking straw in the helmet.  I don't foresee it actually happening, but it may be of value and is worth noting.
As for smell, and for contact senses such as heat, cold, and electrical shock, the problem is that these are practical senses for detecting a real problem with the suit.  I would not want actual smoke or an electrical fault in the suit to be mistaken for an artificial signal, and would therefore discourage considering that interface in too much detail.
We also associate these senses with food.  One of the rewards of returning from a strenuous EVA is the warm smell and taste of food, which carries social bonding and rest value for the crew.  We do not want to add artificial noise to a natural signal that is vital to crew psychology and morale.

Avoiding machine learning limitations and confirmation bias

In a situation where machine learning and virtualization make assumptions before giving feedback to the astronaut, there is a risk of unintentional blindness, the digital equivalent of optical illusions.  We already have this in the first few layers of the retina of the human eye, which generate several optical illusions as processing artifacts before visual information reaches the optic nerve.  Digital data compression may likewise filter out "noise" that is actually signal.
The best counter to this may be to minimize over-selection.  Machine learning for exploring a new environment should be designed to flag anything unexpected without over-classifying it.  It should group items that match expectations, present the outliers, and allow the astronaut to decide what is important.  All anomalies should be logged regardless.  There should NOT be an assumption that stray data is noise: excess noise is itself a signal that something is there, or, if due to equipment failure, an indication that something must be tuned or fixed.  Human cognition may make connections (based on education, experience, insight, and intuition) that machine learning, working from data alone, does not.
Business intelligence systems are designed to give a single clear version of the data following established rules of simple arithmetic.  They are not designed to explore the unknown, because dollars and inventory levels do not magically become euros and water levels.  We cannot over-apply machine intelligence models designed for business to the role of exploration.

Conclusion:  Going native in new frontiers

Our minds and bodies are sculpted specifically for the Earth environment.  As we expanded from warmer climates into harsher weather, we developed clothing and shelter as shields that physically separate us from those environments.  We have extended that shielding to the point that we can travel beyond our planet, while limiting our perception to that of a toddler.  So as we have expanded our range, we have stepped backwards in perception.  We already do this on Earth, moving from climate-controlled homes to vehicles to offices and back with minimal interaction with our world.
We have dramatically increased the noise of virtual interactions through smartphones and other virtual environments.  This is forgivable because we no longer have to hunt antelope or elude lions, although we may still hunt sales and dodge traffic.  But as we enter space, we encounter new dangers from radiation, and opportunities in mineral wealth, to which our bodies are entirely blind even with a hunter's attuned senses.  Sentio reverses this trend of isolation and distraction to make us true explorers of new worlds.  By maximizing our experience of these worlds, we will more quickly become natives of them, with native skills, inventions, and new frontiers in science, technology, and the human experience.
The immigrant arrives in a foreign land with strange customs and no map.  He may learn the land and language but never loses his accent or forgets where he was born.  His children know both languages and customs, and move comfortably in both worlds.  They may visit the old world, but do not consider it home.  The next generation does not speak the language of the grandparents nearly as well, knowing mostly what is said to small children in love or anger.  The fourth generation may not even know what country the great-grandparents came from.  To this child, the original home is completely foreign, with an unpronounceable language and no map.  Only the last name remains to indicate the child's heritage.
While the "transhuman" implications of this technology are certainly visible, they may be simply a matter of degree.  If we become used to wearing suits that extend our senses to more of the spectrum and to greater distances, we remain completely human, just as we are with worldwide communication or faster travel.  But we will feel very strange without that dimension of perception or action.  It is another step out of the comfortable savannah and into a colder, more challenging world.  But we will be ready.  We are natural tool makers and explorers.  After so many millennia, we've gotten pretty good at this.

UPDATES:

Janet on our team wrote the article linked below about our development process.
Tracker News Dots


Also, here is our video on YouTube, created by Julieta:
SENTIO Teaser Video
