The Envelope Problem

Originally posted 2/29/16 on ENLIGHTED

Building scientists, architects, and engineers think of the building enclosure in a fundamental way as the “envelope” – the physical separator between a building’s conditioned and unconditioned environments, including its resistance to the transfer of air, water, heat, light, and noise. Economically and culturally, the building industry is roughly divided into two camps – what I call “outside the envelope,” mostly publicly owned, and “inside the envelope,” mostly privately owned. The “outside” ecosystem consists largely of public infrastructure – roads and streets, sewers, bridges, parks, and the power grid – while the “inside” ecosystem is mostly private buildings: offices, stores, houses, institutions, factories. Each ecosystem has its own owners, investors, and consultants who specialize in planning, funding, designing, building, and operating what gets built or rebuilt. But one thing of rapidly increasing importance in the built environment today is not contained by building envelopes: data.

The common connection between these two ecosystems is the electrical power grid, which is like the central nervous system for civilization. Information and energy (and especially information about energy) are increasingly flowing along the same networks. But because responsibility for each belongs largely to different groups, there are significant barriers to sharing applications and data across the boundary between private buildings and public cities.

A Fundamental Application Break
For networks in fundamentally different realms to exchange data, they need a mediating mechanism, like a translator between two people who speak different languages. So far these are hard to find because of legal, economic, and security issues. According to Jay Shuler, president of Shuler Associates and a Smart City/IoT expert: “It’s unusual for an application to address both environments, or for data to be shared between indoor and outdoor applications, but that is what is needed. And there’s a lot of ambiguity about who owns public data gathered by private agencies working for public entities, like when smart streetlights monitor crowd movement patterns, but that kind of data needs to be shared. It’s very easy for decisions that should be made rationally to devolve into emotion, politics and self interest.”

One such decision is now unfolding in a very public, painful, and portentous way in the recent conflict between the FBI and Apple over breaking the encryption on the iPhone of one of the perpetrators of the massacre in San Bernardino in December. Cases like this, where privacy and security run smack up against grave issues of national security, are bound to surface more frequently as buildings, cars, devices, streets, light fixtures, and of course, people throw off more and more data into the ether. Questions about who owns the data, and how deep into any given data set we should drill, will only multiply as we deploy more sensor networks and advanced analytics.

Technical Issues and Tradeoffs
Most of the technical issues people talk about in this area revolve around communications protocols – Zigbee vs Wi-Fi vs Bluetooth, etc. – and of course security. These issues overlap: VLC (Visible Light Communication, or Li-Fi), for example, is touted as more secure because it works only on line-of-sight and can’t pass through walls. They will eventually be resolved. Part of the reason they dominate the discussion is that tech companies in the network, software, and hardware sectors know that defining and owning a standard greatly facilitates the rapid accumulation of wealth. But given the pace of innovation, which standards might eventually win is increasingly difficult to project, and I think we should be looking more toward combined standards and protocols – in general, the more open the better. Li-Fi, for instance, shows great promise because it greatly expands speed and available bandwidth, but it is uni-directional and not yet ready for commercialization. It may only take off when it is combined with other protocols that offer fast upload speeds.
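The “combined protocols” idea can be pictured abstractly. Here is a minimal sketch of a hybrid link that uses a uni-directional Li-Fi channel for downlink and falls back to a bidirectional RF channel for uplink; the link names and capacity numbers are illustrative placeholders, not real throughput figures:

```python
# Illustrative sketch of a hybrid link: uni-directional Li-Fi for
# downlink, an RF protocol (e.g. Wi-Fi) for uplink.
# Capacities are made-up placeholder numbers.

LINKS = {
    "lifi": {"directions": {"down"},       "mbps": 1000},
    "wifi": {"directions": {"down", "up"}, "mbps": 100},
}

def pick_link(direction):
    """Choose the fastest available link that supports the given direction."""
    candidates = [(name, spec["mbps"]) for name, spec in LINKS.items()
                  if direction in spec["directions"]]
    return max(candidates, key=lambda c: c[1])[0]

print(pick_link("down"))  # Li-Fi wins on speed for downstream traffic
print(pick_link("up"))    # only the RF channel supports upstream
```

The point of the sketch is simply that the two channels complement each other: neither protocol alone serves both directions well, but a combined stack can route each direction over whichever link handles it best.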

There are other technical issues that don’t get discussed quite as much, specifically around data architecture. According to articles like this, we need a new one for IoT. The last estimate I could find of the number of photos uploaded to the cloud every day is 1.8 billion, and this has surely increased since then. And you may have noticed that we’re not all doing live video-chat on our wrist TVs yet – a development envisioned in 1964 in the comic strip Dick Tracy. When that begins, not to mention all the other data gathering going on, the amount of data collected, transmitted, stored, and analyzed will be pretty much incomprehensible. With this massive global increase in data collection comes a tradeoff between storage and computation, or between gathering and analyzing realtime data vs interval data – decision points that in many cases will need to be automated. A specific example: during a terrorist attack, a masked perpetrator is captured on video walking briskly from the scene. Gait recognition software can analyze the captured video and identify his unique gait pattern. That pattern – basically vector points rather than full moving bitmap image sequences – is a far smaller data set than the thousands of hours of video from other locations where the individual might be detected, so matching against it makes targeting and filtering much more efficient computationally. There are of course unlimited other scenarios for this that have nothing to do with security and terrorism, but since security remains a top concern for everyone, one issue there needs clarifying.
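The scale of that storage/computation tradeoff can be made concrete with some back-of-the-envelope arithmetic. All the numbers below are illustrative assumptions (uncompressed video, a hypothetical gait signature of 2-D joint positions), not measurements:

```python
# Rough comparison of raw video vs. a compact gait signature.
# All parameters are assumptions for the sake of the sketch.

def video_bytes(hours, fps=30, width=1920, height=1080, bytes_per_pixel=3):
    """Uncompressed video size for a given duration."""
    frames = hours * 3600 * fps
    return frames * width * height * bytes_per_pixel

def gait_signature_bytes(samples=100, joints=20, coords=2, bytes_per_float=4):
    """A gait cycle stored as vector points: 2-D joint positions per sample."""
    return samples * joints * coords * bytes_per_float

raw = video_bytes(hours=1000)   # thousands of hours of footage to scan
sig = gait_signature_bytes()    # one compact signature to match against

print(f"raw video:      {raw:,} bytes")
print(f"gait signature: {sig:,} bytes")
print(f"reduction:      ~{raw // sig:,}x")
```

Even with generous assumptions about the signature, the reduction is many orders of magnitude – which is why extracting compact features once, rather than shipping raw video everywhere, is the kind of decision a data architecture should automate.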

Disaggregation of Identity Data
Most people take it for granted that ubiquitous data collection systems will collect everything that happens everywhere and retrieve anything at any time, including who you were with at Aunt Suzie’s birthday party last Friday and how much they had to drink. But as the example above suggests, disaggregating granular identity information from massive data sets is a practical technical consideration as well as an issue of social justice. Very useful information can be gathered about how people move through spaces, what their emotions or health conditions are, what they buy, and how they drive, without drilling down into their individual privacy – in fact doing so on a wide scale is highly impractical. Of course, as the Apple case shows, the ability to drill down to an individual and access their private data is always there somewhere, and the debate now is, as it should be, about who gets to do that. So the really important discussion is not about the technology, what it can do, and what standards it will use; it’s about how we use it to improve our lives and our society on a global scale. Whatever the outcome, the Apple vs FBI case will have wide and immediate repercussions all over the world, especially in countries like China.
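The distinction between useful aggregates and identity data can be sketched in a few lines. The event fields and sensor values here are hypothetical, not from any real system: analytics about how spaces are used only needs zone-level counts, so the identity-linked field can be dropped at the point of aggregation.

```python
from collections import Counter

# Hypothetical raw events from an occupancy sensor network.
# "device_id" is the identity-linked field that aggregate
# analytics never needs to see or retain.
events = [
    {"device_id": "a1", "zone": "lobby",  "hour": 9},
    {"device_id": "b2", "zone": "lobby",  "hour": 9},
    {"device_id": "a1", "zone": "atrium", "hour": 10},
    {"device_id": "c3", "zone": "lobby",  "hour": 10},
]

def zone_counts(events):
    """Aggregate movement into (zone, hour) counts, dropping identity."""
    return Counter((e["zone"], e["hour"]) for e in events)

print(zone_counts(events))
```

The output tells a space planner which zones are busy at which hours – without retaining who was where, which is exactly the disaggregation the paragraph above describes.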

Collecting and Analyzing What Matters
There are many immediate practical reasons to cross the building envelope data barrier between inside and outside, private and public. Sharing building energy use data alone represents a huge resource for improving efficiency and stimulating innovation, and the federal government has created the Energy Data Initiative to facilitate this. As more granular energy monitoring systems “inside the envelope” evolve to provide useful data to utilities, public agencies, and regulators, everyone stands to gain from a more accurate picture of how to implement efficiency. This becomes even more important as distributed generation and Smart Grid technology help cities evolve into net producers of energy.
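One concrete form this sharing takes is interval data: fine-grained meter readings inside the envelope rolled up into the coarser intervals a utility or regulator consumes. A minimal sketch with invented sample readings:

```python
# Sketch: roll up 15-minute interior meter readings (kWh) into the
# hourly interval data a utility or regulator might consume.
# The readings are invented sample values.

readings = [  # (hour, quarter, kwh)
    (9, 0, 1.2), (9, 1, 1.1), (9, 2, 1.3), (9, 3, 1.0),
    (10, 0, 2.0), (10, 1, 2.2), (10, 2, 1.9), (10, 3, 2.1),
]

def hourly_totals(readings):
    """Sum sub-hourly readings into per-hour totals."""
    totals = {}
    for hour, _, kwh in readings:
        totals[hour] = round(totals.get(hour, 0.0) + kwh, 3)
    return totals

print(hourly_totals(readings))  # {9: 4.6, 10: 8.2}
```

The rolled-up totals carry the efficiency signal without exposing the finer-grained interior detail – a small example of the same aggregate-vs-granular tradeoff discussed above.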

Other important areas where data should be shared between inside and outside the building envelope include: health and wellness; environmental measures like pollution, biodiversity, water and air quality; traffic and parking; retail activity; pedestrian activity. We’re at the very beginning of our understanding of how these powerful tools can be used to improve the quality, sustainability, and economic vitality of our cities and buildings. We don’t experience the world as two completely different universes, public and private, indoors and out – we move freely between them constantly. We need to break through the “envelope” in how we use data in the built environment.