Information Technology (IT) and Operational Technology (OT) are two distinct yet interconnected fields that play critical roles in modern organizations. IT deals with the use of technology to support business processes, while OT focuses on the use of technology to control and monitor industrial and commercial processes in facilities. Comparing IT and OT systems side by side makes their major differences easy to identify.
What are IT Systems?
IT systems are primarily used to support business processes, such as data storage, processing, and communication. These systems include things like enterprise resource planning (ERP) systems, customer relationship management (CRM) systems, and enterprise-wide networks. They are responsible for maintaining the flow of data within an organization, and provide important services such as email, file storage, and data analysis. IT systems are also responsible for maintaining the security of an organization’s data, including firewalls, intrusion detection systems, and encryption.
What are OT Systems?
OT systems, on the other hand, are used to control and monitor industrial processes. These systems include things like programmable logic controllers (PLCs), distributed control systems (DCSs), and supervisory control and data acquisition (SCADA) systems. They are responsible for controlling and monitoring the physical processes within an organization, such as manufacturing processes, power generation, and water treatment. OT systems are designed to operate in real-time and are often required to operate 24/7.
When we look at IT vs OT systems, trends show they are increasingly being integrated to improve the overall efficiency of companies and facilities. For example, a building owner might use data from an OT system to optimize their HVAC systems, or an energy company might use data from an IT system to identify and respond to potential power outages.
Network Security
One of the major differences between IT and OT is the level of security required. IT systems are typically more connected to the internet and hence more exposed to cyber threats. These systems need to comply with industry-specific standards like the Payment Card Industry Data Security Standard (PCI DSS), HIPAA, and SOC 2. Organizations need to maintain regular backups, run intrusion detection and prevention systems, and keep strong, regularly updated access controls in place.
OT systems, on the other hand, are typically more isolated from the internet and have fewer connections to external networks. These systems need to comply with standards like IEC 62443, which are specific to industrial environments. Because of the real-time nature of their operations, organizations need to have redundancy in place, maintain backups that can be restored within minutes, keep detailed incident response plans, and maintain the physical security of the systems.
Conclusion
IT and OT systems play critical roles in modern organizations, with IT systems primarily focused on supporting business processes and OT systems focused on controlling and monitoring industrial processes. The two fields are becoming increasingly integrated, with organizations leveraging data from both types of systems to improve overall efficiency. However, they are also vastly different in terms of the level of security required, with IT systems being more exposed to cyber threats, and OT systems being more isolated and needing to comply with standards specific to industrial environments.
Although often overlooked by building managers and engineers, data schemas are essential to efficient building management, data analysis, and system automation. That’s because schemas are the building blocks of effective database management. Without them, you foreclose your property’s potential to save energy, adopt new tech, and compile the valuable operational data that make your buildings run more efficiently and at lower cost. But what are schemas, and how do they work?
Database Schema Basics
Databases of all kinds must be organized in pre-determined ways. Otherwise, it’s impossible to store and retrieve data in any workable sense. Think of a schema as a naming standard “language” for how you write, store, and retrieve the information about your building—from the status of its assets to the historical data around energy use.
Just like any language, schemas have rules and conventions. Language has rules around naming things (e.g., nouns, verbs) and grammar (subject + verb + direct object). If we don’t follow the rules, communication turns into confusion or breaks down completely. In the same way, database schema standards outline how things are stored, what they’re called, and how they’re related (i.e., a relational database).
Schemas deal in metadata or “data about data”. For example, books have metadata in the form of their title, author, publisher, or call number. In the same way, buildings have data about their assets, such as asset name, location, site, or type.
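As a minimal sketch (the asset name and field names here are hypothetical, not taken from any particular standard), that kind of building metadata can be captured as simple key-value pairs:

```python
# Hypothetical metadata record for one building asset ("data about data").
asset = {
    "name": "TempSensor-01",
    "type": "temperature_sensor",
    "site": "Main Campus",
    "location": "Level 9",
}

# Metadata lets software answer questions about the asset itself,
# not just the values the asset reports.
print(asset["type"])      # temperature_sensor
print(asset["location"])  # Level 9
```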
For managers and engineers, schemas make recording and managing your asset database easier by ensuring your library is mapped, tagged and organized in a way that’s easily understood by machines and software. So, these standards are intended for both building owners and developers, ensuring both parties are speaking the same language.
Too often, managers and engineers use schemas customized to their site or ad hoc naming conventions that get lost when buildings change and people move on. Such informality creates confusion over time, but maintaining a standard schema ensures your software, BMS and assets can always communicate effectively.
Basic vs Advanced Schema
Some schemas are basic, recording only a few pieces of metadata (e.g., asset name, location, serial number). Other schemas are complex, recording many pieces of data. The more complex your schema, the more descriptive it is, and more description means a “deeper,” more powerful database, just as a long sentence is more descriptive than a short one. For example, consider the following two sentences:
“The dog fetched.”
“The black Labrador fetched the yellow tennis ball from its toy box.”
What are the major differences between these two sentences, and (more importantly) what can we do with the second sentence that we can’t do with the first?
For one, Sentence 2 contains more descriptive words (“black Labrador” “yellow” “toy box”), so we have a better understanding of the context. Second, the shorter sentence lacks an object. We know the dog fetched, but we don’t know what it fetched. The second sentence tells us—it’s the ball. In the longer sentence, we’re even given information about the situation (i.e., the Lab has a toy box). More importantly, Sentence 2 creates a relationship between the subject and the object. We can say, therefore, that the longer sentence is “relational” in that it describes how one thing (the dog) is related to another (the ball), which is related to another thing (the toy box).
These same differences exist between informal and standardised schemas. Longer, more descriptive schemas provide more context and meaning around a building asset. They’re also relational, in that they describe how one asset (e.g., temperature sensor) is related to another (e.g., AHU). Consider these two naming schemas for a temperature sensor housed on Level 9 of a hospital.
While the basic schema lists only the location (LV09) and asset name (TempS), the advanced schema extends the description to include the building, system, asset type, point type, specific location, and the device class. With these added details, we now have a relational description of the sensor. For example, we know it is part of the mechanical (M) system and part of an AHU. Therefore, we can say Schema 2 is part of a relational database, and that it gives us a greater understanding of the asset and its place in the system.
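To make this concrete, here is a small sketch of how software might split an advanced point name into its fields. The name format and separators below are hypothetical, invented for illustration rather than taken from any published schema standard:

```python
# Hypothetical advanced point name: building-system-asset-location-point-class.
point_name = "HOSP-M-AHU02-LV09-TempS-SEN"
fields = ["building", "system", "asset", "location", "point", "device_class"]

# Splitting on the separator recovers a structured, queryable record.
record = dict(zip(fields, point_name.split("-")))

print(record["system"])  # M     -> part of the mechanical system
print(record["asset"])   # AHU02 -> the sensor's parent asset
```

Because every field is machine-readable, software can now answer relational questions such as "which sensors belong to AHU02?" without human interpretation.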
Overall, Schema 2 gives us more context and meaning than Schema 1, and we can use this information to learn more about how our buildings operate. Once we extend this schema strategy to our entire building, we have a powerful way to analyze its contents and functional efficiency.
Schema Benefits
There are many benefits to adopting and maintaining a standard database schema. Here are a few of the most important.
Software Deployment
Standard schemas create a common lexicon and database structure for software developers to use. Developers and building systems benefit from a common, predictable set of rules and naming conventions, which makes software development and deployment easier and cheaper because both stakeholders work from a shared data structure. The developer can simply bolt their software package onto your system, and everything works out of the box.
Advanced Queries and Dynamic Lists
Conventional BMS pages are static. Their queries are hard-baked, with pre-built graphics that deliver data around points such as fault detection, temperatures, run speeds, and statuses. They are “static” in that their queries never change. Your BMS will only “ask” specific questions about your system. They may be important questions, but they are, to be sure, limited. Contrary to appearances, however, buildings aren’t static with respect to the data they produce, and managers and engineers often need to run queries and generate dynamic lists that fall outside the BMS’s purview. A relational, standardised schema provides this flexibility.
For example, say you suspected one of your AHUs was starting to fail. You could run a query that identifies all room temperature sensors tied to that specific AHU that have been reading above 21 degrees for the last 24 hours. If your schema is relational, it understands which specific sensors to target. You could then feed the data into a dynamic page to help troubleshoot performance issues. Dynamic lists like these can improve failure prediction and shorten downtime.
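A sketch of such a query, assuming a hypothetical flat list of historian readings whose point names embed the parent AHU (in practice the BMS or historian API would supply this data):

```python
from datetime import datetime, timedelta

# Hypothetical historian readings; point names are invented for illustration.
readings = [
    {"point": "HOSP-M-AHU02-LV09-TempS", "time": datetime(2022, 5, 1, 9),  "value": 22.4},
    {"point": "HOSP-M-AHU02-LV09-TempS", "time": datetime(2022, 5, 1, 10), "value": 21.8},
    {"point": "HOSP-M-AHU03-LV04-TempS", "time": datetime(2022, 5, 1, 9),  "value": 20.1},
]

def hot_sensors(readings, ahu, threshold=21.0, now=datetime(2022, 5, 1, 12),
                window=timedelta(hours=24)):
    """Points served by one AHU whose readings in the window all exceed the threshold."""
    recent = [r for r in readings
              if ahu in r["point"] and now - r["time"] <= window]
    points = {r["point"] for r in recent}
    return sorted(p for p in points
                  if all(r["value"] > threshold for r in recent if r["point"] == p))

print(hot_sensors(readings, "AHU02"))  # ['HOSP-M-AHU02-LV09-TempS']
```

The relational point name is doing the real work here: because the AHU is part of every sensor's name, one query can target exactly the sensors that matter.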
Asset Replacement
With a standard relational schema, you can identify an asset’s effect on the system and its impact on service. For example, a standard schema can show you the effects on other systems when you plan to replace a failed actuator. Before work begins, you can ask questions like: “Will replacing the actuator stop chilled water to the whole building or just the data center?” or “How will the replacement affect Tenants X, Y, and Z?” Such insights give you and your service engineers the right information for estimating costs, cutting downtime, and ensuring better tenant outcomes.
Updating Building Data
Buildings go through many evolutions in their life cycle, and these changes affect your asset database. Standard relational schemas make updating metadata much easier and more accurate. Recording changes only requires updating one specific piece of data, like a room number or new part. After that, your system automatically adjusts names and relationships, both upstream and downstream. Standard schemas cut the time and costs of updating asset databases.
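A minimal sketch of why this works: when point names are derived from a single authoritative asset record rather than stored as free text, one metadata edit propagates everywhere. The record fields and name format below are hypothetical:

```python
# One authoritative asset record; every point name is derived from it.
asset = {"building": "HOSP", "system": "M", "asset": "AHU02", "location": "LV09"}

def point_name(asset, point):
    """Derive a full point name from the asset record plus a point label."""
    return "-".join([asset["building"], asset["system"],
                     asset["asset"], asset["location"], point])

print(point_name(asset, "TempS"))  # HOSP-M-AHU02-LV09-TempS

# After a refit, update one field...
asset["location"] = "LV10"

# ...and every derived name follows automatically.
print(point_name(asset, "TempS"))  # HOSP-M-AHU02-LV10-TempS
```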
Popular Schema Standards
Today’s most popular standard schemas differ in their approach, but all attempt to standardise asset description and storage to aid interoperability and software deployment. Project Haystack is a tag-based schema focusing on streamlining operation between smart devices within buildings, homes, factories, and cities. The Brick Ontology standardises both asset labels and connections, allowing the user to create a relational database.
Conclusion
It’s difficult to make big data work for you without first putting it into a standard structure. Schemas are that structure—they’re the digital architecture of your building systems. By building your asset database on a standard schema, you’re ensuring your building, tenants, and occupants benefit from future innovations such as advanced analytics, AI, machine learning, and cloud computing. These are the future of building operations and facilities management. Once all buildings graduate to smart status, they’ll be connected to everything, and proptech will help managers do everything from calculating asset depreciation to managing carbon emissions.
Properties need effective cybersecurity measures. Cybercriminals don’t just attack high profile companies and governments; they target small to medium businesses too. Computer viruses range from annoying adware infiltrating your browser to costly ransomware attacks. In 2021 the world saw a 105% jump in ransomware attacks. Healthcare alone saw a 755% increase! Businesses are paying out billions each year to save their proprietary and/or customer data—and paying only makes things worse.
The sharp rise in ransomware has forced building owners to take a serious look at their IT infrastructure. This is alongside adapting to the challenges of the pandemic and managing a remote workforce. Interestingly, some security experts point to remote work as one cause for the increase in ransomware. Since employees are no longer behind corporate firewalls, their home-based laptops and mobile devices become “attack vectors” for gaining entry to company networks.
Remote entry points are also an issue for building control systems. As buildings become more connected and “smart”, the threat of data breaches increases. That’s because system integration, IoT devices, and building automation systems (BAS) increase connectivity and wireless operation. It’s a problem the U.S. government has known about since 2015 after the GAO warned of a 74% jump in cyber incidents involving government-owned industrial control systems.
Building control systems like BAS/BMS connect hundreds of devices and sensors that make up systems like fire, access, HVAC, electrical, and lift. Connectivity makes it easier for cybercriminals to make their way to more sensitive data because there are more paths to follow. Wireless and IoT devices make networks vulnerable to remote Wi-Fi exploits and password hacks. These potential data breaches and financial losses from malware are why property teams need to practice effective cybersecurity habits.
Set Up Multiple User Accounts
One good security habit to adopt is proper account creation and assignment for your team. To save time and hassle, some building managers create one master admin account and share it among their team members. It’s tempting, when someone needs to make a few quick changes, to simply email your login and password. However, this puts your BAS at risk of cyberattack if those credentials are misplaced or abused. To be cyber safe, create both admin- and user-level accounts and assign them to each employee.
Almost all BAS software lets you create multiple accounts and at various levels of access. Individual account creation does three key things:
It ensures inexperienced members aren’t given access to critical controls.
It makes sure user actions are recorded by the system.
It helps users work more effectively.
Modern BAS systems track what users do, which is helpful when things in the system are improperly changed. If everyone signs into the system with the same account, then you can’t tell who did what and when. This can slow down repairs and troubleshooting because you must rely on faulty human memory instead of an accurate digital record. Also, when inexperienced or new users sign into an admin account, they may spend an inordinate amount of time searching for the tool or feature they need. User-level account interfaces are simplified for this reason. Too many options can tank productivity by forcing workers to waste time navigating a complex interface looking for a single item.
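Two of these benefits, restricted access and an audit trail, can be sketched in a few lines. The role names and actions are illustrative, not taken from any particular BAS:

```python
# Illustrative role-based access check with an audit trail.
audit_log = []

def perform(user, role, action, critical=False):
    """Allow critical actions only for admins; log every attempt."""
    allowed = (not critical) or role == "admin"
    audit_log.append((user, action, "ok" if allowed else "denied"))
    return allowed

perform("alice", "admin", "change AHU setpoint", critical=True)
perform("bob", "user", "change AHU setpoint", critical=True)

for entry in audit_log:
    print(entry)
# ('alice', 'change AHU setpoint', 'ok')
# ('bob', 'change AHU setpoint', 'denied')
```

With one shared account, both log entries would show the same user, and the record would be useless for troubleshooting.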
Password Creation
Creating strong passwords is one of the most impactful cybersecurity habits you can adopt. Too often, people continue to use highly predictable passwords (e.g., “123456” or “qwerty”) to secure their most sensitive data. What’s worse, most of us also use these same flimsy passwords for all our accounts. That behavior is too predictable, and predictability is the Achilles’ heel of security.
Make sure your team knows password best practices. When it comes to password creation, length and complexity matter. Passwords should be at least 8 characters long and include numbers and special characters (e.g., @, !, &). The longer the password the better; however, there’s a limit to how many characters a person can hold in long-term memory. To combat the memorization problem, use passcodes instead.
Passcodes are acronyms made from random words or long sentences. To create a passcode, use the first letter of each word to form your password. For example: “My cat whiskers is 3 years old and likes to have her belly rubbed.” This sentence (which is personal and easy to remember) becomes the password: “mcwi3yoalthhbr”. Then, swap out a few special characters, and you’re good to go.
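As a sketch, the first-letter rule is simple enough to express in a couple of lines of Python:

```python
def passcode(sentence: str) -> str:
    """Build a passcode from the first character of each word in a memorable sentence."""
    return "".join(word[0] for word in sentence.lower().split())

print(passcode("My cat Whiskers is 3 years old and likes to have her belly rubbed."))
# mcwi3yoalthhbr
```

From there, swap a few characters by hand (e.g., an "a" for an "@") to add the special characters most systems require.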
If passcodes seem too complex, make your life easier by simply using a password manager. These apps create and store complex passwords in the cloud for you. They will even fill in form fields for you, saving you valuable time. Most have free or inexpensive annual plans, so the investment is minimal, while time savings and security are maximized.
Suspicious Link Detection
A building’s devices aren’t its only weak spots. In fact, occupants are often the major sources of malware. Cybercriminals can use social engineering to trick employees into opening phishing emails and navigating to fake websites. The tactic is called a “pharming attack” and is a common way for hackers to steal an employee’s username and password. The fake website looks and feels like the authentic one, but it’s a duplicate. Employees unwittingly enter their username and password, which is recorded and used to gain entry to the account.
Hackers design phishing emails and fake websites to look like official corporate digital assets, often using the same branding, logos, language, etc. Most are convincing enough to fool an employee who’s under a bit of stress and/or not paying attention. However, there are a few tell-tale signs to look for:
Salesy Language. Cybercriminals often employ high-pressure sales language or scare tactics. Phishing emails may claim “suspicious activity” or fake “charges” to user accounts to entice holders to hastily move to fix “issues” without first confirming the source of the emails.
Grammar mistakes. Cybercriminals often aren’t native speakers of your language, so look for grammar mistakes or misspellings. These are rare in authentic corporate emails and are a strong sign of a fake.
Pixelated logos. Hackers use official logos to trick email recipients, but often these logos are hastily copied and pasted from websites and may be incorrectly sized resulting in pixelated or strange looking images.
Strange URLs. Links have two parts: the hypertext (e.g., “Contact Us”) and the address (e.g., https://7nox.com/). Never trust the hypertext to tell you where the link goes. Always check the URL address. To do this, hover your cursor over the text without clicking and read the URL displayed in the bottom left corner of your browser. The URL should contain the company’s address. If it’s simply a long string of strange characters, it may be a pharming attack.
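The same hover-and-check habit can be automated. This sketch compares a link's actual host against an expected domain; the allow-list below is illustrative, not an official source:

```python
from urllib.parse import urlparse

# Illustrative allow-list of hosts you expect company links to use.
TRUSTED_HOSTS = {"7nox.com", "www.7nox.com"}

def looks_legitimate(url: str) -> bool:
    """True only if the link's real host is one we trust."""
    host = urlparse(url).hostname or ""
    return host in TRUSTED_HOSTS

print(looks_legitimate("https://7nox.com/contact"))                # True
print(looks_legitimate("https://7nox.com.evil-example.io/login"))  # False
```

Note the second URL: it *starts* with the trusted domain, but the real host is `7nox.com.evil-example.io`, which is exactly the trick pharming attacks rely on.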
BAS Backups
Make sure your BMS provider backs up your BAS/BMS system on a regular basis. Backups keep your system secure against ransomware attacks, which rely on businesses not having copies of their data. Plus, system backups provide redundancy when your system goes down or when you shut your building down for changes. If controller settings aren’t “persistent”, they may not be saved during a reboot of your BMS. It’s critical that you have backups to ensure these changes are saved.
Conclusion
While building automation and connectivity bring many wonderful things to the built environment, they do require owners and managers to make their IT and OT more resilient. However, without proper staff training, these technical efforts may prove fruitless. In cybersecurity, humans are often the weakest link. That’s why cybersecurity shouldn’t simply be a training box to tick at the end of the year. It should be an ongoing attitude and effort by all employees. Focus your training on seasoned staff, who may be laxer in their habits, and on newcomers, who may have few habits at all.
After the company formerly known as Facebook rebranded itself as Meta in late 2021, much of the world discovered the “metaverse”—the next generation of human connectivity that would fundamentally transform how we socialize and work.
According to Zuckerberg’s vision, the metaverse will be a place where social interactions are completely virtual, with self-created and customizable avatars interacting in ways that seem so real, we will easily take them as such. The new digital reality would affect work too, allowing workers to be at the “office” without leaving their home or changing out of their sweatpants. Remote workers no longer need to worry their physical office cohorts will race ahead, grabbing the next promotion or swanky project. Everyone would work in the same “space” regardless of their physical location.
The move to an immersive digital social life will certainly have massive implications for society, but building a new digital Agora for the modern world only scratches the surface of what the metaverse will be. That’s because its value extends beyond video games, social media, and the workplace. In fact, the sector to feel the most impact of these new virtual spaces will likely be today’s very real built environments.
Building Digital Twins
One key aspect of the metaverse for the built environment is the digital twin—a virtual doppelganger of a physical object or process. The notion of such a digital double is several decades old and the culmination of advances in 3D/BIM software, machine learning, and virtual technology. While architects have produced 2D drawings of buildings for hundreds of years, 3D software added that extra dimension. Later, virtual reality would make the fourth dimension (time) possible. These advances set the stage for modeling physical processes like the human body or providing virtual walkthroughs of spaces like residential and commercial buildings.
However, digital twins serve a more important and practical purpose than visual mimicry; they attempt to model reality itself. To do this, digital twins must account for as many data points as possible. This includes every object, process and system that exists within a building—from the largest HVAC plant to the smallest occupancy sensor. All digital building systems function within a virtual world dynamically modeled to mimic the dimensions of time and space and natural forces. In short, the virtual world contains the same physical limitations as its physical counterpart.
The advantage of a digital twin, whether it be a building or an entire city, is that you can make changes and see what happens without doing it for real. This is advantageous when real-life trials would cost too much time or money, or would be impractical or impossible. Climate scientists, for example, use digital twins of the Earth’s weather systems to make predictions about the effects of global warming.
The more data points that make up your digital twin, the more accurate your simulations. In this way, data points function much like pixels that make up a screen, in that the more you can pack into a model, the higher the “resolution” and more life-like images you get.
However, such huge buckets of data take enormous amounts of computational power to process and manage. That’s where artificial intelligence and machine learning have helped give birth to the metaverse. Sophisticated algorithms do much of the “thinking” for us—locating patterns, making connections, running simulations, and spitting out the results. Without them, modeling complex systems is a rudimentary process, and it’s only relatively recently that we’ve been able to handle enough data to represent a virtual facsimile of complex physical processes and systems.
Helping Speed Up Building Decarbonization Adoption
As the metaverse takes its first steps, markets are already pricing in the tech’s potential to transform the built world. From a global market size of $3.1 billion in 2020, experts project the digital twin market will reach $48.2 billion by 2026. Such growth is why some engineers, architects, and entrepreneurs are looking to the metaverse and AI technology to help lower carbon emissions. In fact, an Ernst and Young study found that digital twins can reduce a building’s carbon emissions by half.
Founder and CEO of Cityzenith, Michael Jansen, oversees a digital twin platform that’s leading the push to decarbonize entire cities using metaverse technology. Recently Jansen hosted a live event laying out the current challenges to building decarbonization and how investing in digital twins can speed up green capital investments in the U.S. One pain point for property owners is retrofitting costs, which the CEO estimates at $4 to $7 USD per square foot ($43 to $75 per square meter). “When you consider the fact that building owners spend about $2.10 per square foot on energy annually, it’s a large number,” Jansen states.
Another hurdle to building decarbonization adoption is the inherent conflict between the short-term gains investors demand vs the long-term investment that sustainable retrofitting requires. “The payback periods on typical [green] retrofits can be 10 to 15 years,” Jansen explains. “Those at the top of the investment pyramid typically look for returns within three to five years. As a result, a lot of these investments just don’t happen.”
While Jansen admits there are many challenges to green investment and adoption, he believes data is the obvious answer, at least for the short term. But buildings and cities contain thousands of software platforms, untold sensors, and BMS systems sending and receiving gigabytes of data through the air and over wires. It’s understandable that building managers can often feel as if they’re drowning in a sea of data and the digital tools that fill it.
Jansen claims it’s this “chaos of tools” that’s slowing building decarbonization efforts throughout the market. However, it’s understandable property owners would sidestep solving the issue of data glut, especially given the more immediate threats like higher construction costs, supply chain issues, swelling energy prices, and a shrinking demand for commercial office spaces.
Still, the Cityzenith CEO is correct in the assumption that funneling the increasing volume of data streams into a singular control is a desired outcome for most property and city managers. In fact, it’s this same consolidating impulse that’s motivating the move to integrated systems and open protocols within BMS technology today. Consolidation certainly increases data points, which is what digital twins need to be effective.
What’s needed is a “system of systems,” Jansen says. “The purpose of building a kind of metaverse around all of this…was to allow all these decarbonization processes to happen in one common place. So, all that activity could be studied and simulated before anybody actually spends a dollar. We use digital twins to predict energy consumption and financial outcomes to help drive down capital risk and increase adoption.”
Metaverse for Asset and Risk Management
While digital twins have numerous upsides for building decarbonization and efficiency, they can also help property owners and managers safeguard their investments. With aggregated data from building systems, equipment, and real-time sensors, digital twins can run physics-based models built on “what-if” scenarios.
Building and city managers can ask energy-related questions like “What if we bought 10% more solar and wind energy?” or “What if we generated more power on-site with a roof-top solar array?” After running such scenarios through a digitized property, owners would have a more accurate picture of the financial and operational impacts before committing. More importantly, they could easily tweak their input data until the outcomes fall within acceptable limits.
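A toy version of such a "what-if" run, comparing annual energy cost under different on-site solar fractions. All rates and consumption figures below are illustrative placeholders, not real tariffs or outputs from any digital twin platform:

```python
# Illustrative inputs: yearly consumption and per-kWh costs.
ANNUAL_KWH = 1_200_000   # building's yearly electricity use
GRID_RATE = 0.18         # $/kWh purchased from the grid
SOLAR_RATE = 0.06        # effective $/kWh for on-site solar

def annual_cost(solar_fraction: float) -> float:
    """Yearly energy spend when a fraction of demand is met by on-site solar."""
    grid_kwh = ANNUAL_KWH * (1 - solar_fraction)
    solar_kwh = ANNUAL_KWH * solar_fraction
    return grid_kwh * GRID_RATE + solar_kwh * SOLAR_RATE

# Run the same scenario with different inputs until the outcome is acceptable.
for fraction in (0.0, 0.10, 0.25):
    print(f"{fraction:.0%} solar: ${annual_cost(fraction):,.0f} per year")
```

A real digital twin layers physics-based simulation and live sensor data on top of this kind of comparison, but the decision loop is the same: change an input, rerun, compare outcomes.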
By using digital twins to accurately see future outcomes, property managers can also bolster their risk management. “What-if” statements can also apply to emergency situations like pandemics, natural disasters, and social upheaval. During COVID, many property owners scrambled to adjust to sudden lockdowns, indoor air quality demands, new hygiene mandates, and occupancy management challenges. Digital twin simulations of these variables could have better prepared owners and managers for the challenges while saving time, money, and possibly lives.
Sources:
“Cityzenith’s real world metaverse for decarbonization”. Published April 21, 2022. Accessed April 28, 2022. https://youtu.be/l0L_7gwguoA
“Everything Facebook revealed about the Metaverse in 11 minutes”. CNET. Published October 29, 2021. Accessed April 26, 2022. https://youtu.be/gElfIo6uw4g
The social, environmental, and technological challenges facing the commercial real estate sector are significant. Many building owners and managers are still adjusting to the disruptions of the COVID pandemic: lockdowns, remote working, mask mandates, rising energy costs, and the move to hybrid work models. Few, if any, anticipated these events, nor the dramatic shifts they would kick-start in building management and design.
On top of quickly developing social changes, there are the long-term environmental impacts of global warming. Much of the planet is already feeling the implications of rising temperatures, with increased flooding events, stronger storms, and eroding coastlines. All pose specific risks to property owners, since 10% of the world’s population lives in coastal areas that are less than 10 meters above sea level, according to a UN fact sheet.
Increased migration to cities and urban areas is spurring building development to a faster pace. The World Economic Forum estimates that two-thirds of the global population will live in cities by 2050, and already an estimated 800 million people live in more than 570 coastal cities vulnerable to a sea-level rise of 0.5 meters by 2050. Technological advances pose yet another challenge to commercial real estate owners, as many feel pressure from market competition and new government regulations to adopt energy- and time-saving building tech.
Given these social, environmental and technological challenges, it would seem change itself is becoming increasingly accelerated and unpredictable. Making things worse is the fact that we know less about the extent to which these factors affect each other. A warmer climate makes future pandemics more likely, which increases remote working, which reduces greenhouse gases. But higher temperatures also increase HVAC demand, which increases energy usage and greenhouse gases.
The entire system is connected, and each component poses a significant challenge in its own right; however, when combined, they will undoubtedly produce unforeseen outcomes that require quick course corrections at best, and entire paradigm shifts at worst.
While no one can predict the future, owners can position themselves and their properties to better manage the unknown unknowns. One way to stay flexible and adaptable is to adopt automated building controls built on open protocols. Open building systems benefit from more technological flexibility, which can act as an important hedge against uncertainty.
Open System Protocols: A Short History
In the late ’70s and early ’80s, large companies like Siemens, Johnson Controls, and Honeywell took the first steps in connecting systems through electronic networks. Each brand developed proprietary “languages”, or protocols, that allowed building components like HVAC, lighting, and alarms to “talk” to one another. While this created an efficient, dependable, and integrated system, it also locked each property owner into the company’s proprietary hardware and software. And since connected systems were intended to last a decade or more, owners had little flexibility for innovation and change. In fact, it was the building systems provider that determined the speed and quality of that change.
Later, in the mid-to-late ’90s, new organizations and companies like Tridium introduced open protocols and frameworks like the Niagara Framework, BACnet and LonWorks. These languages no longer limited owners to a single brand; instead, they could “interpret” between the other protocols, freeing owners to mix and match brands. Being “open” now meant property owners and managers could change the way they invested in and used building technology.
Today, open protocols play a key role in helping evolve the next generation of automated building systems via IoT devices and smart building technology.
Open Systems and Adaptability
With open protocols, owners and managers can adapt quickly to market trends. With proprietary systems, you’re locked into one manufacturer’s software and hardware, and making upgrades or replacing components can be more costly than with an open system. That’s because an open system is much like an open market: the more companies compete for your business, the lower the price. Having the choice to shop around gives you the budget flexibility to weather sudden market fluctuations.
Quality is also affected. With open building systems, you can expand your search for new building systems and components beyond a single contractor, who may or may not offer the best quality available, and pick best-of-breed tech. Component quality varies with price, but open systems give you the flexibility to make bigger investments where they matter most. High-quality investments are often long-term investments, so CAPEX projects also become easier to plan and deliver.
From a budgetary perspective, the best adaptability feature of open building systems is the ability to connect new devices to older systems. Open systems offer better ROI on legacy components: building owners can realize the full value of their technology investments by extending the life of older systems while adopting new solutions to stay competitive.
Open source also makes it easier to customise your building systems. Non-proprietary protocols are valuable tools for developers and engineers to create bespoke solutions for the specific needs of their customers. Since connecting devices is easier, solutions are faster to develop, keeping you nimble and on-budget.
Building Brand
Many of today’s biggest brands extend beyond their name recognition and marketing to include their physical properties. From Amazon’s Biodomes to Apple’s Spaceship, today’s corporate facilities and HQs are as much a part of the corporate brand as the logos themselves. But future businesses need not be on the Fortune 500 list to feel the necessity of such architectural recognition. Trends are already moving there fast, as post-pandemic attitudes toward workplace safety, air quality and hygiene become part of a business’s social contract with its workers and communities. The safety and security occupants feel in a facility speaks volumes about those who own and lease its spaces.
In a recent episode of DCTV, Mitchell Day of Distech expressed the idea that a building is essentially a fundamental representation of a brand’s core values:
“A building is no longer just where you work,” he states. “A building expresses to the public who you are as a company, how you want [the public] to see how you see your employees and your products and who you want to be to the rest of the world.”
Day’s statement not only reflects the growing importance of facilities in general, but it also signals a shift in attitudes towards buildings as a core part of corporate responsibility. Today, brands feel more pressure than ever to adopt sustainable manufacturing processes, low-carbon footprint buildings, alternative energy sources and social responsibility. How a building functions, its efficiency and connectivity are indicators of that responsibility.
Open building systems offer the flexibility to adapt to cultural expectations. As Day himself says: “Open systems provide the power to give people more choices on how they express their brand.”
The Future is Complexity
It’s often said that buildings are “living” things, formed from complex systems working together to produce a habitable and safe environment for occupants. It’s an apt analogy, yet “complexity” is relative. With every passing year, emerging technologies like system integration, IoT, machine learning, smart tech and next-gen sensors are making the dream of true system unification a reality. Tech is evolving at such a rapid pace that, in a decade or two, today’s buildings may be likened to single-celled organisms by comparison. The entire “carpentered world” will seem much more fluid.
While there are downsides to complexity to be sure, one of the biggest upsides is adaptability. The more complex, the more tools you have, and the more nuanced your approach can be. Complexity and connectivity are what property owners, and their buildings, will need to adapt to the challenges of future pandemics, energy transitions and global warming. Open building systems help building owners and managers manage such complexity.
Touch screens are ubiquitous. We use them at the grocery store to check out, and at the airport to check in. They’re at visitor center kiosks, in our banks, our homes and even our cars. And today, because they’re the primary interface of smartphones, touch screens are literally in our faces for an average of 4.2 hours every day. They are the “Black Mirror” that fans of the series will recognize as the part of the device that reflects our image back at us.
But despite their prevalence, few know how touch screens work. It’s not because they’re a “new” technology (they’ve been around for roughly six decades). Instead, it’s likely a failure of users to fully appreciate the ingenuity that goes into solving the unique problem of connecting humans and computers through touch. To that end, here’s a quick look at the four basic types of touch screens and how they function. But first, a little touch screen 101.
How do Touch Screens Work?
All touch screens work by creating a predictable X and Y grid pattern on the surface of the screen (think back to the coordinate plane from your primary-school math class). As a finger or stylus interacts with the grid, it introduces a disturbance. The disturbance might be a fluctuation in electrical resistance, capacitance, heat or even acoustic wave flow. The screen’s sensors detect these changes and use them to pinpoint the finger or stylus position. Finally, the sensors pass our clicks and gestures to the CPU, which executes the appropriate command (e.g., “open the app”). Simple in theory, but complex in practice.
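As a toy illustration of that grid-and-disturbance model, here’s a short Python sketch. The grid size, detection threshold and coordinate mapping are all invented for the example, not taken from any real touch controller:

```python
# Toy model of touch detection: scan a grid of sensor readings for the
# cell with the largest disturbance and map it to screen coordinates.

def locate_touch(grid, screen_w, screen_h, threshold=0.5):
    """grid: rows x cols of disturbance readings (0.0 = no touch)."""
    rows, cols = len(grid), len(grid[0])
    # Find the cell with the strongest disturbance.
    r, c = max(
        ((i, j) for i in range(rows) for j in range(cols)),
        key=lambda rc: grid[rc[0]][rc[1]],
    )
    if grid[r][c] < threshold:
        return None  # no touch detected
    # Map the grid cell's centre to pixel coordinates.
    x = (c + 0.5) * screen_w / cols
    y = (r + 0.5) * screen_h / rows
    return (x, y)

# A 4x4 grid with a disturbance near the top-right corner:
grid = [[0, 0, 0, 0.9],
        [0, 0, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(locate_touch(grid, 800, 600))  # -> (700.0, 75.0)
```

Real controllers do far more (filtering, debouncing, multi-touch tracking), but the core idea of mapping a grid disturbance to X/Y coordinates is the same across all four screen types below.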
Screen Tech Tradeoffs
Like any technology, touch screens have several cost-benefit factors, and manufacturers tailor their products to maximise specific benefits for different consumer needs. One common tradeoff for touch screens is accuracy vs cost. Typically, the more accurate the screen, the more expensive, due to the extra components or more expensive materials used. Screen clarity is another consideration. Some screen designs provide 100% screen illumination, while others adopt layered screens, which can dampen resolution and brightness. Other common screen characteristics include:
Durability vs cost
Single vs multi-touch (i.e., two or more fingers)
Finger touch vs stylus vs both
Resistance to contaminants like water and oil
Sensitivity to electromagnetic interference (EMI) or direct sunlight
High vs low power consumption
Consumers and businesses often trade less-needed features for more desirable ones. For example, facility access screens require more durability and “touch life,” with less consideration towards clarity and multi-touch, while smartphone makers need both (and more!) to compete.
Resistive Touch Screens
The most straightforward touch screen design is the resistive touch screen (RTS). These screens employ a multi-layered design: glass covered by a thin plastic film. Between these two layers is a gap containing two metallic electrode layers, both resistive to electrical flow. The gap is filled with air or an inert gas, and the electrodes are arranged in vertical and horizontal grid lines. Essentially, resistive touch screens work like an electric switch: when the user presses the screen, the two metallic layers come into contact and complete the circuit. The device then senses the exact spot of contact on the screen.
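In a common 4-wire resistive design, the controller drives a voltage across one layer and reads the contact point off the other layer as a simple voltage divider, so each coordinate is just a linear scaling of the raw reading. A minimal sketch of that conversion, assuming a 10-bit ADC (the resolution and screen dimensions are example values):

```python
ADC_MAX = 1023  # full-scale reading of an assumed 10-bit ADC

def resistive_position(adc_x, adc_y, screen_w, screen_h):
    """Convert raw voltage-divider readings from a 4-wire resistive
    screen into pixel coordinates via linear scaling."""
    x = adc_x / ADC_MAX * screen_w
    y = adc_y / ADC_MAX * screen_h
    return (x, y)

# A press near mid-screen reads roughly half of full scale on each axis:
print(resistive_position(512, 512, 800, 480))
```

In practice, controllers also calibrate for edge non-linearity and ignore readings below a pressure threshold, but the position math itself is this simple.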
RTS are low-cost and use little power. They’re also resistant to contaminants like water and oil, since droplets can’t “press” the screen. Almost any object can interact with the screen, so even thickly gloved hands work. However, RTS usually offer lower screen clarity and less damage and scratch resistance.
Capacitive Touch Screens
One screen type you’ll find on almost every smartphone is the capacitive touch screen (CTS). These screens have three layers: a glass substrate, a transparent electrode layer and a protective layer. The screen holds a constant, small electrical charge, or capacitance. When the user’s finger touches the screen, it draws off some of the charge and lowers the screen’s capacitance at that point. Sensors located at the four corners of the screen detect the change and determine the resulting touch point.
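For the surface-capacitive case, those four corner sensors can be turned into coordinates with simple ratios: the closer the touch is to a corner, the more current that corner supplies. A toy Python sketch of that idea (the current values and screen size are invented for the example):

```python
def capacitive_position(i_tl, i_tr, i_bl, i_br, screen_w, screen_h):
    """Estimate the touch point on a surface-capacitive screen from
    the currents drawn at the four corners (top-left, top-right,
    bottom-left, bottom-right)."""
    total = i_tl + i_tr + i_bl + i_br
    x = (i_tr + i_br) / total * screen_w  # share drawn by right-side corners
    y = (i_bl + i_br) / total * screen_h  # share drawn by bottom corners
    return (x, y)

# Equal corner currents mean a touch at the centre of the screen:
print(capacitive_position(1.0, 1.0, 1.0, 1.0, 800, 600))  # -> (400.0, 300.0)
```

Projected-capacitive (P-Cap) screens work differently, sensing changes across a fine electrode grid rather than four corners, which is what makes true multi-touch possible.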
Capacitive screens come in two types: surface and projected (P-Cap), with the latter being the common screen type in today’s smartphones and tablets. P-Cap screens also include a thin layer of glass on top of the protective film and allow for multi-touch and thin-gloved use, so they’re popular in healthcare settings where users wear latex gloves.
With fewer layers, CTS offer high screen clarity, as well as better accuracy and scratch resistance. But their electrified designs put them at risk of interference from other EMI sources, and interaction is limited to fingers and/or specialised styluses.
Surface Acoustic Wave Touch Screens
Surface Acoustic Wave (SAW) touch screens use sound waves instead of electricity. SAWs have three components: transmitting transducers, receiving transducers, and reflectors. Together, these components maintain a constant pattern of acoustic waves across the screen’s surface. When a finger touches the screen, it absorbs the sound waves, which consequently never reach their intended receivers. The device’s controller then uses the missing information to calculate the location of the touch.
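The timing side of that calculation can be sketched as follows: a finger shows up as a dip in the received echo train, and the dip’s sample index times the wave speed gives the distance along that axis. A toy model, with invented values for the sample interval and wave speed:

```python
def saw_touch_axis(received, expected, speed, dt):
    """Locate a touch along one axis of a SAW screen.
    received/expected: echo amplitudes sampled every dt seconds.
    A finger absorbs acoustic energy, so the sample where the
    received amplitude drops furthest below the expected level
    marks the touch; distance = wave speed x elapsed time."""
    dips = [e - r for e, r in zip(expected, received)]
    i = dips.index(max(dips))  # sample with the deepest absorption dip
    return speed * dt * i

# A flat expected echo train; the dip at sample 3 marks the touch.
# With a 3000 m/s wave and 10-microsecond samples, that's 0.09 m
# from the transducer along this axis:
expected = [1.0] * 6
received = [1.0, 1.0, 1.0, 0.4, 1.0, 1.0]
print(saw_touch_axis(received, expected, speed=3000.0, dt=1e-5))  # -> 0.09
```

Running the same calculation on the perpendicular axis yields the second coordinate.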
SAWs have no traditional layers, so they tend to have the best image quality and illumination of any touch screen. They have superior scratch resistance, but are susceptible to water and solid contaminants, which can trigger false “touches.”
Infrared Touch Screen
Infrared (IR) touch screens are like SAW screens in that they contain no metallic layers. However, instead of producing ultrasonic waves, IR screens use emitters and receivers to create a grid of invisible infrared light. When a finger or other object interrupts the light beams, the sensors can locate the exact touch point. Those coordinates are then sent to the CPU to process the command.
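A toy sketch of that beam-grid logic: the touch point is simply the intersection of the blocked horizontal and vertical beams. The grid sizes here are invented, and real controllers interpolate far finer positions than whole beam indices:

```python
def ir_touch_point(row_beams, col_beams):
    """row_beams/col_beams: True if the beam reached its receiver,
    False if an object blocked it. Returns the touch point as the
    intersection of the blocked beams, or None if nothing is blocked."""
    blocked_rows = [i for i, ok in enumerate(row_beams) if not ok]
    blocked_cols = [j for j, ok in enumerate(col_beams) if not ok]
    if not blocked_rows or not blocked_cols:
        return None
    # Take the centre of the blocked span on each axis (a fingertip
    # is wide enough to block more than one beam).
    r = sum(blocked_rows) / len(blocked_rows)
    c = sum(blocked_cols) / len(blocked_cols)
    return (c, r)  # (x, y) in beam-grid coordinates

rows = [True, False, False, True]       # horizontal beams 1-2 blocked
cols = [True, True, False, True, True]  # vertical beam 2 blocked
print(ir_touch_point(rows, cols))  # -> (2.0, 1.5)
```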
IR screens have superior screen clarity and light transmission. Plus, they offer excellent scratch resistance and multi-touch controls. Downsides include high cost and possible interference from direct sunlight, pooled water, and built-up dust and grime.