"But the big win from self-assembly is adapting to changing demands and environments-in particular, addressing situations and tasks that the robot designer doesn't even know about at the time of construction."
This brought to mind the system-constraint-based approach to control systems design found in Cognitive Work Analysis and Ecological Interface Design, propounded by Jens Rasmussen, Kim Vicente, and others. One of its touted benefits is provision for scenarios beyond those that the system designer can reasonably anticipate.
The Discover article states that software has configuration-flexibility advantages over hardware. That is true to some extent. However, I wonder how far we are from being able to create interfaces that can reconfigure themselves (whether based on genetic algorithms, as with the robots in the article, or...) to optimize the user experience in unanticipated circumstances.
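To make the idea concrete, here is a minimal sketch of what a genetic algorithm tuning an interface might look like. Everything here is hypothetical: the layout parameters, and especially the fitness function, which in a real system would have to come from observed user performance (task completion time, error rates, and so on) rather than a hand-coded formula.

```python
import random

# Hypothetical interface "genome": a few layout parameters the GA can tune.
PARAMS = {
    "font_size": list(range(8, 25)),        # points
    "button_spacing": list(range(2, 21)),   # pixels
    "contrast": [round(0.1 * i, 1) for i in range(1, 11)],  # 0.1 .. 1.0
}

def random_config():
    return {k: random.choice(v) for k, v in PARAMS.items()}

def fitness(config):
    # Stand-in objective: prefer mid-size fonts, generous spacing,
    # and high contrast. A real system would measure actual users.
    return (-abs(config["font_size"] - 14)
            + 0.5 * config["button_spacing"]
            + 10 * config["contrast"])

def crossover(a, b):
    # Uniform crossover: each parameter inherited from either parent.
    return {k: random.choice([a[k], b[k]]) for k in PARAMS}

def mutate(config, rate=0.2):
    # Occasionally re-randomize a parameter to keep exploring.
    return {k: (random.choice(PARAMS[k]) if random.random() < rate else v)
            for k, v in config.items()}

def evolve(generations=30, pop_size=20):
    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best)
```

The hard part, of course, is not the search loop but the fitness signal: the robots in the article get physical feedback from the environment, whereas an interface would need an equally honest measure of "unanticipated circumstances handled well."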
Is anyone aware of any exploration or developments in this area?
Ah well, back to my late-twentieth-century technologies...