Following in the footsteps of the mainframe (centralized), the client-server model (decentralized), and now the cloud (centralized), edge computing is transitioning the world’s businesses back to a decentralized computing model as data piles up faster than it can be sent back. However, with no edge standards yet in an emerging marketplace, edge computing presents an entirely new set of challenges with few design patterns and solutions available to solve them. Organizations are frustrated when they have to home-grow or cobble together high-risk solutions that allow them to capture and act on data in new, complex environments when they are accustomed to low-risk, AWS-class redundancy and elasticity. A new generation of hardware and software enablement methodologies must grow to fill that vacuum, but for now there is a shortage of proven edge technologies, vendors, or even scaling heuristics the C-suite can trust.

The 10 Problems of Edge Computing white paper, written with Nanometrics, is a business case study on how one organization broke from legacy computing methodologies and refactored the data supply chain from the ground up for its customers. To solve the hardware-software interoperability challenges, both companies would need to develop next-generation hardware, software enablement platforms, and fleet management solutions. Given that edge computing is a blend of cloud runtime, elasticity, and inference coupled with RF connectivity, the following challenge emerged: what does a software-defined edge look like? Ultimately, developing new ruggedized hardware that operates as a container ingest platform allows mission software to slipstream onto edge devices, bringing a slice of the cloud (minus the cloud costs and connectivity requirements) to the tactical edge as the fastest way to rapidly field new capabilities and analytics.

Whether you are the CTO/COO of your organization, an IT/OT professional, or a consultant leading a client’s edge transformation initiative, this white paper documents the 10 challenges of implementing edge computing and how they were solved. The boundaries of the cloud cease to exist, but an entirely new set of problems arises that can torpedo an organization’s edge transformation before it begins. Breaking down the problem domain and the terra incognita problem first, learn from the leading industry experts who have solutioned at the edge of the edge in extreme environments, developing tech stacks that support the future of edge computing.




Key Takeaways

  • 70% of all enterprises will process data at the edge by 2023
  • Partnering in edge transformations helps offset risks and tooling costs
  • SWaP-C plays a critical role in edge hardware selection
  • Zero-touch is a critical capability for fielding at scale
  • Rugged portability is the new gold standard
  • DevOps standards help address software provisioning and scaling challenges
  • Failover and redundancy have entirely new meanings in the edge computing landscape

Outline

  1. Executive Summary
  2. Company Profile
  3. Problem Statement
  4. Project Challenges
     4a. Size, Weight, Power & Cost
     4b. Quality of the Data Supply Chain
  5. Solution Summary
  6. Hardware Overview
  7. Software Overview
     7a. GNARBOX Software
     7b. Nanometrics Software
         7i. Data Campaign Management
         7ii. Zero Touch Onboarding
         7iii. Data Life Cycle
  8. Business Modeling
  9. Operational Architecture
  10. Conclusion
  11. Key Contacts

Metadata Details

Author: Devereaux Milburn

Filesize: 6.8MB

Filetype: .PDF

# of Pages: 19

Last Update Date: April 2021

10 Problems of Edge Computing 

Solutioning at the Edge Requires New IT Strategies & Form Factors

Executive Summary

Edge computing and the safeguarding of large and critical datasets are generating an entirely new set of challenges that organizations face with few design patterns available to solve them. As organizations race to develop their tactical computing edge, they must create solutions that allow them to capture data in new and complex environments when they are accustomed to data center-class redundancy and elasticity. More often than not, these organizations have spent a decade developing workflows and tooling solutions optimized for the cloud or discrete hardware, and they must now pivot to working in an entirely new computing paradigm. IT/OT professionals must take into consideration device onboarding, connectivity, disaster recovery, power, telemetry, monitoring, synchronicity, data exfil, lifecycle, and maintenance when designing an edge computing solution. All of these factors become exponentially more challenging to solve when deployed to the edge, particularly in remote, extreme environments with low/no power or connectivity.


This white paper details the solutions generated together by Nanometrics, the world’s most trusted provider of seismic monitoring solutions, and GNARBOX, a thought leader and provider of ruggedized edge computing hardware and critical data backup workflow software. By focusing on the power and data workflow challenges, GNARBOX and Nanometrics forced new tactical approaches and decision spaces that incumbent vendors were unequipped to solve. The result is a ruggedized edge computing platform, part of the Nanometrics Pegasus ecosystem, capable of working across broadband and passive node deployments. The partnership was further expedited by GNARBOX’s cutting-edge tech stack, which allowed Nanometrics to easily slipstream their Docker containers onto an edge computing device.


The hardware solution pairs Nanometrics’ Pegasus digital recorder, a highly portable, low-power, mobile integrated data acquisition system, with the GNARBOX Edge Compute Platform. This pairing brings a slice of the cloud, minus the connectivity requirements and cost, to the tactical edge. Before the GNARBOX Edge Compute Platform, Nanometrics relied on legacy methods such as removable media and laptops, which increased cost, weight, and logistics burden. Instead, GNARBOX allows for reduced power consumption, size, and weight, thereby decreasing the required support gear footprint, while still performing critical computing tasks at the edge.


As a buffer to the cloud, the software solution leverages GNARBOX’s five-year history of protecting data, both for disaster recovery (DR) and for automating data chain-of-custody workflows. Their platform is one of the first edge devices to embrace modern DevOps methodologies, participating in toolchains and tech stacks that support the nature of modern CI/CD software development. Nanometrics utilized GNARBOX’s API and backend system to install their own containerized software workloads on the GNARBOX Edge Compute Platform, which allowed them to deploy their own custom software packages during operations. The GNARBOX & Pegasus solution eliminates a single point of failure and guarantees the quality of data transfer via hash validation, ultimately freeing data sets trapped at the edge and putting them into motion, then action, and ultimately outcomes.


Nanometrics’ and GNARBOX’s unique origins and use cases have allowed them to pioneer advanced hardware and software methodologies to capture and process data at the edge. Other organizations and verticals can now avail themselves of these purpose-built solutions for the tactical edge, including environments where edge integration has historically been impossible, using software enablement hardware that has been battle tested.


Company Profile

With 160 employees and over 30 years of experience, Nanometrics specializes in providing customers with monitoring technologies and equipment for studying man-made and natural seismicity. As the world’s most trusted provider of seismic monitoring solutions, they have developed the highest quality instrumentation and seismic edge data workflow tools, installed on every continent across the globe. With a pedigree founded in precision instrumentation, network technology, and software applications for seismological and environmental research, they have been supporting researchers from the world’s leading scientific institutions, universities, and corporations in conducting data-driven field experiments.


Problem Statement

Edge computing provides an entirely new set of challenges organizations now face with few design patterns available to solve them. How do I capture and act on data in new environments when I’m accustomed to Amazon’s data center-class redundancy, rich service offerings, and elastic scaling? When an organization has spent a decade tooling for cloud, Agile, and DevOps software modalities, how can those investments be leveraged, and do they even fit into an entirely new computing paradigm such as edge? Breaking down the problem domain further exposes ten compounding challenges Information Technology (IT) and Operational Technology (OT) decision makers will face in their edge computing transformations:


  1. Device Onboarding: How do I make device provisioning fool-proof and zero-touch across thousands of devices? How do the software engineering and data science teams pass complex technical configuration parameters for data acquisition campaigns to a field technician, and then to the hardware, all while having little to no connectivity?

  2. Connectivity: There are significant geographic challenges in having reliable field connectivity; infrastructure is not typically built or reliable where data needs to be captured. How is connectivity achieved in radio frequency (RF) degraded/denied environments?

  3. Disaster Recovery: By deploying to the edge I’ve now reverted back to having a single point of failure. How do I achieve business-class Disaster Recovery (DR) and Business Continuity Planning (BCP) strategies at the data acquisition site?

  4. Power: Limited power systems at data capture sites mean that power must be strategically allocated. Sometimes power is generated onsite with renewables; how can I deploy power-hungry devices to the edge with such restrictive power constraints?

  5. Telemetry: Given power constraints, the priority is data capture over data compression, encryption, backhaul, and telemetry. How can I manage to secure data and make it portable and actionable with these new constraints? 

  6. Monitoring: When some edge devices are off-grid with limited to no connectivity, application and hardware performance monitoring is not constant. How do I capture hardware and software performance logs and fold them into management tools for analysis?

  7. Synchronicity: Some use cases require the synchronicity of all the data capture stations in the array for full analytics. How do I achieve synchronization of asynchronous devices unaware of each other?

  8. Data Exfil: Data backhaul is expensive, and even sneakernet options such as laptops are power-hungry, heavy, and cumbersome to bring to the field just to extract data. They often do not meet the ruggedness requirements or have the battery capacity to fulfill the mission; how do I exfil my data sets and scale that solution?

  9. Lifecycle: Edge devices are often wedded to the 10-20 year hardware lifecycles of the technical/industrial equipment in the field; however, IT refresh lifecycles are 3-5 years. How can I pick an edge platform that blends best with the tooling and acquisition dynamics of my industry?

  10. Maintenance: Sending a field technician to a remote location to physically touch each device in the fleet, or to extract data, is expensive and time consuming. How do I streamline edge maintenance, software updates, and reduce time to capture application data?

Project Challenges

Specializing in collecting and analyzing critical real-time data for global, regional, and local networks, the Nanometrics team aimed to reinvent and modernize everything about how data is strategized, captured, and managed at the edge. Focusing on power and data workflow challenges forced new tactical approaches and decision spaces incumbent vendors were not equipped to solve. 

1. Size, Weight, Power & Cost (SWaP-C)

Whether you are installing an edge device inside machinery, in a plane, or on a mountain, Size, Weight, Power, and Cost (SWaP-C) are key factors. An 80-pound rack-mounted server is not feasible to carry 100 miles into the mission, bootstrap to an all-terrain vehicle, or retroactively add to an aerospace platform. Putting it in a Pelican case only adds to the weight and size, compounding the problem. The longer the terrestrial trek to install, the heavier it becomes. Likewise vertically: as airspace rises from drone, to plane, to rocket, to satellite, the higher the system, the lighter (and yet more rugged) the computing footprint must become. Thus is born the saying “ounces are pounds.” Weight means logistical cost: the cost in fuel, the cost in volume, the cost to repair, and the cost on other impacted systems. These factors are critical to consider when developing a new production edge system; more factors equate to increased complexity.


Developing new edge computing capabilities presents design decisions involving making hardware lighter, smaller, and less power-hungry with ultra-low wattage devices. In Nanometrics’ case, everything had to be packed in and carried up the mountain, where there is minimal to zero field infrastructure. The power budget is incredibly tight, field sensors and systems cannot afford to power external devices when they join, and being able to connect in and pull data out with no power implications is critical.


According to a 2020 IDC report, up to 70% of all enterprises will process data at the edge within three years. With the continued proliferation of edge computing, there is ever-increasing pressure to find new platforms that require low size, low weight, low power, and low cost; rugged portability is the new gold standard. SWaP-C friendly edge solutions are in high demand because so little is available in the market. As a result, Nanometrics and GNARBOX both set out to develop hardened computing appliances at extremely small power scales with the ability to run a full edge data station and supporting data recovery model. Aiming to solve the hardware-software interoperability challenges at the edge of the edge, both companies would need to develop next-generation hardware, software enablement platforms, and fleet management solutions. Solving these problems in extreme environments first would allow the tech transfer process to begin, so that other industries and verticals could adopt proven edge compute systems for their use cases using established implementation design patterns.

2. Quality of the Data Supply Chain

How will we back up and aggregate that data at the edge and in extreme environments? The removable media previously used has been identified as a common point of failure. Even if an SD card is inserted properly and is in the right format, there have been instances where technicians accidentally put the same card back in and returned home from the field with an empty card, losing all the data. It is critical to replace tiny SD cards with something that is not removable, corruptible, or easy to misplace. Insertable media creates a weak link in the data chain if it gets lost, stolen, or corrupted along the way.


How do we eliminate portable flash media like SD cards and laptops in extreme environments, where they are not ruggedized and are highly susceptible to dust and moisture? How do we make everything fool-proof? Operator error continues to be a serious challenge; how does the “toucher” in the field know if the computing equipment is provisioned correctly, working, and backed up? When the software designer, data manager, and installer are never in the same place or network, the odds of things not working go up exponentially. What does a rapid deployment kit look like, and how do I make it seamless, even fun, to provision an edge data campaign? How does one achieve confidence it has been installed and configured properly?


Between hardware and provisioning, the cost per gigabyte goes up exponentially at the edge, as does the risk quotient. Failover and redundancy have entirely new meanings in the edge computing landscape. The saying “one is none, and two is one” is particularly true when the computing fabric is outside the safe confines of a secure, environmentally controlled data center. You are no longer just dealing with a technician who accidentally reformats the device, wiping the drive. The device may get stolen, disrupted by wildlife, or, in Nanometrics’ use case, it could literally fall into the volcano it is taking seismic readings from. With these dynamics, how does one create immutable data and secure the consistency of data acquisition?


Lastly, the edge does not have the storage elasticity that the cloud does; hardware has fixed capacity. When an edge device becomes full, it can stop recording data or, if programmed to do so, automatically start to erase and recycle old data to free up space. If the operating system shares the same disk partition, its performance can also become seriously compromised when the disk fills. Capacity planning for edge use cases is essential, as is planning data retention and release strategies. Nanometrics solved this problem by ensuring there is 2-4 times the needed storage capacity for the mission window, but retrieving that data presented challenges. Accessing a region might take multiple days of travel, and the last mile is a 6-hour drive, then a long hike. Once there, it is only 10-15 seconds of work to download the data. How do I reduce my data access costs? As time is money, how do we enhance and modernize the user experience, increase the quality of the data supply chain, and reduce the time an operator has to be onsite?
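As a back-of-the-envelope illustration of that kind of capacity planning, a minimal sketch is shown below; every figure in it is a hypothetical example, not an actual Pegasus or GNARBOX specification.

```python
# Illustrative edge storage sizing; all figures are hypothetical examples,
# not actual Pegasus or GNARBOX specifications.
BYTES_PER_SAMPLE = 4        # e.g., 32-bit samples before compression
CHANNELS = 3                # e.g., a triaxial seismic sensor
SAMPLE_RATE_HZ = 250        # samples per second, per channel
MISSION_DAYS = 90           # planned data acquisition window
SAFETY_MARGIN = 3           # 2-4x headroom, per the guidance above

bytes_per_day = BYTES_PER_SAMPLE * CHANNELS * SAMPLE_RATE_HZ * 86_400
raw_gb = bytes_per_day * MISSION_DAYS / 1e9
provisioned_gb = raw_gb * SAFETY_MARGIN

print(f"Raw capture: {raw_gb:.1f} GB, provisioned: {provisioned_gb:.1f} GB")
# Raw capture: 23.3 GB, provisioned: 70.0 GB for this example campaign
```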

Solution Summary 

Nanometrics recognized early on in the edge solutioning life cycle that it would be necessary to develop lightweight mechanisms to modernize every aspect of the edge computing pipeline and supply chain. From the desktop, to cloud, to mobile, to edge, then across developer to data manager to field technician, every link in the chain was reforged and harmonized together. The result: a foundationally new ecosystem for portable sensors, ruggedized edge computing appliances, partner hardware, and mobile data governance applications capable of working across broadband and passive node deployments.

Hardware Overview

Starting with a purpose-built edge appliance, the Pegasus digital data recorder is a highly portable, low-power, mobile integrated data acquisition system. In response to demanding SWaP-C requirements, Nanometrics reduced the station size and weight, which allowed the Pegasus platform to achieve scalability. With the increased number of edge compute stations that could be carried into the field, one can deploy more stations for a longer period of time with less investment. Providing high-fidelity data acquisition tailored to the needs of portable monitoring campaigns, its power consumption of <200 mW represented a reduction of 60% for a typical sensor and digitizer station. Additionally, engineered modularity opened up broad choices in the battery chemistry and sensor technologies that could be employed, thus facilitating transport logistics and matching station designs to the needs of the data science initiative and deployment region.
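To make the power figure concrete, the sketch below sizes a battery for a mission window; the station power, voltage, and discharge figures are illustrative assumptions rather than published Pegasus specifications.

```python
# Rough battery sizing for a low-power edge station; values are
# illustrative assumptions, not published Pegasus specifications.
STATION_POWER_W = 0.2       # ~200 mW sensor + digitizer station
MISSION_DAYS = 90           # planned deployment window
SYSTEM_VOLTAGE_V = 12.0     # nominal battery voltage
DEPTH_OF_DISCHARGE = 0.8    # usable fraction of battery capacity

energy_wh = STATION_POWER_W * 24 * MISSION_DAYS
battery_ah = energy_wh / (SYSTEM_VOLTAGE_V * DEPTH_OF_DISCHARGE)

print(f"Energy budget: {energy_wh:.0f} Wh -> ~{battery_ah:.0f} Ah at 12 V")
# Energy budget: 432 Wh -> ~45 Ah at 12 V, before any solar recharge
```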

 

Supporting the Pegasus ecosystem at 0.8 lb (375 g), the GNARBOX Edge Compute Platform brought a slice of the cloud (minus the cloud costs and connectivity requirements) to the tactical edge, further augmenting the Nanometrics suite of edge hardware capabilities. Providing the same Docker container-runtime functionality as the cloud, GNARBOX offered a terabyte of “cloud in pocket” in an ultra-portable format. As a container ingest platform, it allowed any containerized mission software to slipstream onto the device and replace legacy methods of data harvesting. By removing reliance on removable media entirely, the GNARBOX hardware eliminated a range of problems, from backups being lost or damaged in the harvesting process to the manual process of labeling and organizing the data from each location.
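As a rough sketch of what container ingest can look like in practice, the example below pushes a containerized workload to an edge device that exposes a Docker endpoint; the device address, image tarball, and container names are assumptions for illustration and do not describe GNARBOX’s actual interface.

```python
# Hypothetical sketch of loading and running a containerized mission workload
# on an edge device that exposes a Docker API endpoint. The host, image, and
# container names are illustrative only.
import docker

# Connect to the edge device's Docker daemon (address is an assumption).
client = docker.DockerClient(base_url="ssh://technician@edge-device.local")

# Load the mission software image from a tarball carried into the field.
with open("mission-software.tar", "rb") as image_tarball:
    image = client.images.load(image_tarball.read())[0]

# Run it detached with an automatic restart policy so it survives power cycles.
client.containers.run(
    image.id,
    name="mission-workload",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    volumes={"/data": {"bind": "/data", "mode": "rw"}},
)
```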


GNARBOX’s precision lightweight blend of Just Enough Operating System (JeOS) and Just Enough Hardware hit the sweet spot to bring rapid new capabilities to the edge. “It’s enough of a computer that it can do critical pieces I need, which means I don’t need to bring a computer to the field,” said one Nanometrics engineer. With a price point up to 75% less than that of a ruggedized laptop, it represented a capabilities boost in contrast to laptops, which offer less convenience and durability for edge harvesting activities. A GNARBOX data harvest became faster than a laptop-based one, with one month of data seamlessly downloaded and processed on the fly in under 10 seconds. It is not just speed that matters; data is immediately ready to use, delivered in the industry-standard miniSEED format along with StationXML metadata.
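To illustrate what “immediately ready to use” means downstream, the sketch below loads a harvested miniSEED file and its StationXML metadata with ObsPy, a widely used open-source seismology toolkit that is not named in this paper; the filenames are placeholders.

```python
# Illustrative downstream use of harvested data; filenames are placeholders
# and ObsPy is cited only as an example consumer of miniSEED + StationXML.
from obspy import read, read_inventory

stream = read("harvest/station01.mseed")             # waveform time series
inventory = read_inventory("harvest/station01.xml")  # StationXML metadata

# Because instrument response metadata travels with the waveforms, response
# correction can be applied immediately, with no manual reconciliation step.
stream.remove_response(inventory=inventory, output="VEL")
print(stream)
```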


Ultimately, the exceptionally low power consumption of both the Pegasus and GNARBOX systems significantly reduced the power requirements, station size, and weight of the total edge footprint, resulting in more stations for longer periods of mission time. As an added value proposition, when hardwired in, GNARBOX’s bi-directional power manager brings supplemental power to the data recorder while transferring data, augmenting the power profile with GNARBOX’s hot-swap uninterruptible power supply (UPS) battery.



Geophones are used to monitor rockfalls and to gain an understanding of the water flow mechanisms behind the rock face. Prof. Bernard Giroux is shown holding the GNARBOX connected to a Pegasus Digitizer. This program is a collaborative effort between Prof. Francis Gauthier of the Laboratoire de géomorphologie et de gestion des risques en montagne (LGGRM) at the Université du Québec à Rimouski (UQAR) and Prof. Giroux of the Institut national de la recherche scientifique (INRS).


Software Overview

Introduction

Amazon’s AWS cloud employs the concept of software-defined infrastructure: solving scalability and service challenges through hardware abstraction. Notably, one of 5G’s value propositions is also software-defined networking, where network functionality is managed through software rather than hardware to achieve network slicing. Given that edge computing is a blend of cloud runtime, elasticity, and inference coupled with RF connectivity, what does a software-defined edge look like? Deploying to the edge requires a new class of enablement software, as all of these applications and services cannot be handled without distributed software management pipelines and fallback mechanisms. The following sections break down how each company developed software-defined mechanisms to create Edge-as-a-Service (EaaS) capabilities for downstream customers. Overall, the edge ecosystem and its elements must be software-defined in order to provide agile, flexible, and scalable services that can be controlled from a few centralized locations in the network, as well as off network.

GNARBOX 

In addition to being an original equipment manufacturer (OEM) producing small edge servers, GNARBOX has multiple DevOps teams designing software for its edge computing platform. As a buffer to the cloud, GNARBOX’s five-year history of protecting data made it a natural fit to solve both the disaster recovery (DR) backup requirements and the automation of data chain-of-custody workflows. It is one of the first edge devices to embrace modern DevOps methodologies, participating in toolchains and tech stacks that support the nature of modern software development. Running software in containerized Docker workloads is not only lighter and faster, but also allows for Continuous Integration/Continuous Delivery (CI/CD) microservices, an Agile DevOps methodology essential to delivering, updating, and securing software at the edge quickly.


When Nanometrics researched how they could facilitate loading third-party containers onto MIL-STD-810 ruggedized computing hardware, technical teams on both sides partnered to expand the range of edge capabilities the GNARBOX platform could achieve. Since the tech stack and platform were already proven within the GNARBOX user community for some years, it was an excellent opportunity to open up the platform for other companies to deploy their mission software on it. GNARBOX built an API and backend system that allowed Nanometrics to install their own containerized software workloads onto the GNARBOX Edge Compute Platform. This enabled their software team to utilize the waterproof buttons, write messages to the OLED screen, generate menus and actions for entries in those menus, and run their own custom code during backups. Additionally, through enhancements made to the GNARBOX operating system, the software could connect to the embedded system through the mobile application for field technicians.


The Pegasus station knows the last time it was harvested and provides that data to GNARBOX’s edge device. When plugged in, the field technician can use the GNARBOX to select between a repeat backup, a full backup, or a differential backup (i.e., all new data since the last successful backup). Even though the data is stored in a proprietary format, the code on the GNARBOX can understand and process it. Data is only copied off the Pegasus eMMC flash memory, never removed, ensuring there are working duplicates, which eliminates the single point of failure and reduces risk. The software guarantees a full set of data is captured every time with hash validation functions, ensuring the quality of the data transfer and freeing data sets trapped at the edge, putting them into motion, then action, and ultimately outcomes.
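The pattern can be sketched in a few lines; the paths, selection logic, and function names below are a minimal illustration of differential harvesting with hash validation, not GNARBOX’s actual backup implementation.

```python
# Minimal sketch of a differential harvest with hash validation; paths and
# logic are illustrative, not GNARBOX's actual backup implementation.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def differential_harvest(source: Path, dest: Path, last_harvest_ts: float) -> None:
    """Copy only files newer than the last successful harvest, verifying each copy."""
    for src_file in source.rglob("*"):
        if not src_file.is_file() or src_file.stat().st_mtime <= last_harvest_ts:
            continue  # unchanged since the last backup; skip
        dst_file = dest / src_file.relative_to(source)
        dst_file.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_file, dst_file)  # copy, never move, off the recorder
        if sha256(src_file) != sha256(dst_file):
            raise IOError(f"Hash mismatch for {src_file}; transfer not trusted")
```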


Nanometrics 

Data Campaign Management 

Aiming to reinvent every aspect of the data acquisition and supply chain, Nanometrics set out to solve some of the toughest software problems. The problem with edge computing is that one has often lost the safety and consistency of a brick-and-mortar office building or data center. As an engineer, it is very challenging to design a solution where the end users/stakeholders often never meet and are in completely different careers, yet need to be perfectly coordinated across regions and networks so that complex steps of the dance occur automatically with no IT helpdesk. Thus, ease of use was a key factor. Well-designed, friendly, and intuitive workflows for all scenarios were critical to ensuring that even the most inexperienced operator could deploy the solution with confidence.


Nanometrics developed an ecosystem of software solutions, named Pegasus, replete with simplified data chain-of-custody solutions for the fleet. They effectively optimized autonomous data capture by developing a seamless end-to-end workflow to deliver automatically constructed, ready-to-analyze data. Starting this process is the software-as-a-service (SaaS) based Campaign Manager, which allows data owners to become campaign managers, managing every aspect of a data acquisition campaign from a web interface. One can develop detailed templates specifying a data acquisition strategy, then easily apply those templates selectively or en masse across the data capture stations, achieving fleet management.
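To illustrate the template idea, the sketch below captures acquisition parameters once and applies them across many stations; the field names and values are purely hypothetical and do not reflect the actual Campaign Manager schema.

```python
# Purely illustrative campaign template; field names and values are
# hypothetical and do not reflect the actual Campaign Manager schema.
campaign_template = {
    "campaign": "spring-aftershock-survey",
    "sample_rate_hz": 250,
    "recording_mode": "continuous",
    "retention_policy": "keep-until-harvested",
    "harvest_format": {"waveforms": "miniSEED", "metadata": "StationXML"},
}

# Apply the same template en masse across the fleet's data capture stations.
stations = [f"STN{i:03d}" for i in range(1, 51)]
fleet_plan = {code: dict(campaign_template, station_code=code) for code in stations}
```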


Zero Touch Onboarding

With complex data governance policies, highly technical and scientific equipment settings, new software packages and configuration parameters specific to each campaign, it is incredibly challenging for a campaign manager to field edge computing hardware. How do I get extremely technical computing details to someone in another country, often speaking a different language, with no internet or mobile connectivity in the field, and sometimes hanging from a rope over a cliff? 


Recognizing that the most ubiquitous device in any workplace is the mobile phone, the Nanometrics team leveraged a mobile application as the provisioning vehicle, solving the wireless onboarding process in the field and achieving zero-touch provisioning. The web-based Campaign Manager pushes all the provisioning settings and install configurations to the mobile device of the “toucher” technician before they go to work at the edge. Automatically distributed to every team member’s mobile device, this ensures the onboarding profile is secure, verified, and in pocket while still on the network, before the mission begins. At the data edge, the mobile application automatically connects to the edge compute hardware/sensor platform and pushes the updates and configuration wirelessly. The app also provides immediate insight into the data station’s state of health, change logs of adjustments made in the field, and the live waveform data being generated, all of which are synced back to the cloud as a complete record of the deployment and updated against the master campaign plan.
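One way to picture the offline trust step is sketched below: the profile is signed while the technician is still on the network, then verified on the device with no connectivity required. The profile format and the HMAC-based scheme are assumptions for illustration, not Nanometrics’ actual mechanism.

```python
# Illustrative offline verification of a provisioning profile; the profile
# format and HMAC-based scheme are assumptions, not the actual mechanism.
import hashlib
import hmac
import json

SHARED_KEY = b"provisioning-key-distributed-on-network"  # hypothetical key

def sign_profile(profile: dict) -> dict:
    payload = json.dumps(profile, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"profile": profile, "signature": tag}

def verify_profile(bundle: dict) -> bool:
    """Runs entirely offline on the mobile device or the edge hardware."""
    payload = json.dumps(bundle["profile"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])

bundle = sign_profile({"station": "STN042", "sample_rate_hz": 250})
assert verify_profile(bundle)  # configuration trusted before it is applied
```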


Data Life Cycle

The ability to effectively manage the steady flow of data from field sensors is essential to the success of any seismic network, or any data acquisition network for that matter. Some data types have an expiration date, so it is critical to manage the edge effectively with workflow tools that ensure acquired data can quickly be acted upon. Even if remote sites utilize a satellite communication system, bandwidth is extremely limited and data backhaul costs are expensive. With such limited bandwidth, how does one manage large-scale portable data acquisition and monitoring campaigns?


The Pegasus software ecosystem was designed to solve the ten edge computing challenges, providing cloud-based analytics with seamless end-to-end workflows to plan, deploy, and harvest sensor data. Dealing with instrument time series data, system response data, experiment metadata, and instrument metadata, it provides the ability to analyze and archive straight from the field, thus unifying the collection, transmission, and analysis of seismic data in seconds.

Business Modeling

Paramount to any successful technology strategy is its alignment to the business strategy. How will spending money on this initiative make the company money? How does our organization rapidly obtain new capabilities and offset risk at the same time? Part of the modularity of the edge computing space is the ability to uniquely curate each aspect of the ecosystem; the boundaries and limits of the cloud cease to exist. However, there are also no edge standards yet in an emerging marketplace; it is the wild west, with a shortage of proven edge appliances or OEMs the C-suite can trust.


Modern acquisition strategies embrace the ideas of never building what you can buy, never buying what you can lease, and, where appropriate, moving from CapEx to OpEx, all of which were popularized by the cloud. Nanometrics navigated this business challenge by offsetting the R&D, tooling, and production costs of the data-exfil portion of the project, partnering with GNARBOX and leveraging its edge platform to bring a whole product offering to market faster. Ultimately, partnering with a hardware OEM whose business model encourages customizing its edge platform proved to be an effective business strategy, allowing Nanometrics to leverage the tooling and IP investments already made in the space.


Specializing in making ruggedized computing hardware and critical data backup software for over five years, GNARBOX was identified as a thought leader when the Nanometrics team was unable to find suitable solutions in the marketplace. The existing ruggedized MIL-STD-810 hardware, providing ingress protection against moisture and dust, broad environmental temperature ratings, human factors engineering, and advanced thermal management capabilities, made the edge computing platform a natural fit. Beyond the cool factor of having an entire wireless server running in your pocket, the tech stack was ahead of the industry and allowed the Nanometrics software team to slipstream their Docker containers into an edge computing platform that had already solved those challenges. GNARBOX performed custom software development to enable Nanometrics’ work, supported that development with engineering resources, and supported the product rollout with its support team.

Operational Architecture

Conclusion

Looking at the edge is easy; getting to the edge is hard. As every experienced technology professional knows, adoption is not just about the technology, it is about effectively aligning resources and stakeholders in a new methodology to create organizational outcomes. Preemptively understanding the challenges and picking initiative partners familiar with working in the solution space can ensure a company’s edge initiative isn’t driven by legacy cloud or telco methodologies, but instead extols the virtues and next-gen computing principles of capturing and acting on data at the edge without the backhaul costs.


As a new paradigm of computing following in the footsteps of the mainframe (centralized), the client-server model (decentralized), and now the cloud (centralized), the edge is moving back to a decentralized computing model, and a new generation of hardware and adoption methodologies must grow to fill that vacuum. Both Nanometrics’ and GNARBOX’s unique origins and use cases forced each company to find new approaches to developing advanced hardware and then reinventing how to pump mission software into extreme environments. Thankfully, these extreme use cases are helping to pioneer and solve the size, cost, and hardening challenges because they have to. Other verticals and use cases can now avail themselves of purpose-built solutions coming from the edge of the edge. As the SWaP-C and zero-touch challenges are solved, an IT/OT professional can add edge computing in new spaces, such as inside a pipe, bootstrapped to industrial machinery, or mounted underneath an autonomous vehicle, for years at a time. Hardening the device to work in extremes brings peace of mind that it has passed sea trials and is qualified to operate in other, more forgiving edge use cases like AI/ML inference on the assembly line, computational vision at a manufacturer, connected seaport logistics, smart mining, digital signage, interactive kiosks, medical devices, and health care service robots, all at a lifecycle scale that reduces cost by starting with purpose-built edge enablement platforms.




Key Contacts

Author: 

Devereaux Milburn

Director of DevSecOps 

devereaux@gnarbox.com

Subject Matter Expert: 

Sylvain Pigeon 

Senior Product Manager

sylvainpigeon@nanometrics.ca

Sales: 

Jay Peterson 

Director of Sales 

jay@gnarbox.com

Sales: 

Andrew Moores

Director of Sales

Seismology Business Unit 

andrewmoores@nanometrics.ca