2.5 quintillion bytes of data are produced EVERY DAY! That's roughly 3.3 billion CDs' worth of information.

The amount of data generated is growing exponentially. Best estimates suggest that at least 2.5 quintillion bytes of data are produced every day. It would take 26 years to move an exabyte (1 quintillion bytes) to the cloud on today’s fastest fibre networks. Some companies have resorted to moving data by commercial transport: for example, 100 petabytes loaded onto specialized 40-foot trucks would take 6 months to move.

“Over the last two years alone, 90% of the world's recorded data was generated”

This pace is only accelerating with the growth of Edge Computing, the Internet of Things (IoT), Smart Cities, Industry 4.0, Artificial intelligence (AI), Crypto, Blockchain, and Virtual Reality.

Where does all this data come from?

This is what happens every minute…

  • More than 3.7 billion humans use the internet (that’s a growth rate of 7.5 percent over 2016).
  • More than 120 professionals join LinkedIn
  • Users watch 4,146,600 YouTube videos
  • We send 16 million text messages
  • 156 million emails are sent; there will be 9 billion email users by 2019
  • People will take 1.2 trillion photos by the end of 2017
  • There will be 4.7 trillion photos stored
  • Connected devices have grown from approximately 2 billion in 2006 to an anticipated 200+ billion by 2020, one of the primary drivers of our edge compute and data center growth.
  • The Weather Channel receives 18,055,556 forecast requests
  • There are 600 new page edits to Wikipedia

Why edge computing?

Edge computing speeds up the flow of data, enabling real-time data processing with minimal latency. It allows smart applications and devices to respond to data almost instantaneously, as it is being created, eliminating lag time. This is critical for Industry 4.0 technologies such as IIoT devices, UAVs, smart factories, mills, and even self-driving cars.

Edge computing allows for efficient data processing in that large amounts of data can be processed near the source, reducing Internet bandwidth usage. This both significantly reduces costs and ensures that applications can be used effectively in remote locations. Additionally, processing data without ever sending it to a public cloud adds a layer of critical security for industries with sensitive data.

Practical Applications

Here is one example of how just one sensor saved tens of thousands of dollars. Implement that data capture across thousands of sensors, apply machine learning or artificial intelligence, and you now have actionable intelligence for making cost-saving, proactive decisions.

“Mill stoppages and shutdowns are costly; estimates range from $300 to $900 per minute in lost production time. This sawmill experienced line shutdowns when the pressure dropped below ~70 PSI. Many times per month, the lumber processing line was halted due to unidentified issues in its pneumatic air system.

Wireless PSI monitors were installed in key locations throughout the pressurized system. As pressure approached the 70 PSI threshold, an SMS alert was automatically sent to the on-site operations manager. The manager was then able to review the online analytics and identify which part of the system was experiencing problematic drops in pressure. This allowed the operators to modify their processes and make changes to the system to help prevent shutdowns, and it provided valuable data both before and after maintenance had taken place.” (Aretas Sensor Networks)
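The alerting flow in the case study can be sketched as a simple threshold check on incoming readings. This is a minimal illustration only: the sensor names, the phone number, and the send_sms() placeholder are assumptions, not Aretas's actual implementation.

```python
# Minimal sketch of the alerting flow: watch a batch of PSI readings and
# notify an operator when pressure drops to or below the threshold.
# Sensor IDs, threshold handling, and send_sms() are illustrative only.

ALERT_THRESHOLD_PSI = 70.0

def send_sms(recipient: str, message: str) -> None:
    # Placeholder for a real SMS gateway call (carrier or cloud API).
    print(f"SMS to {recipient}: {message}")

def check_readings(readings: dict, recipient: str) -> list:
    """Return the sensor IDs whose pressure is at or below the threshold."""
    low = [sensor for sensor, psi in readings.items() if psi <= ALERT_THRESHOLD_PSI]
    for sensor in low:
        send_sms(recipient, f"Low pressure on {sensor}: {readings[sensor]:.1f} PSI")
    return low

# Example: one zone of the pneumatic system has dropped below 70 PSI.
alerts = check_readings(
    {"planer-line": 92.4, "trim-saw": 69.3, "sorter": 101.0},
    recipient="+1-555-0100",
)
# alerts == ["trim-saw"]
```

In practice the readings would arrive continuously from the wireless monitors, and the same check would run on each batch at the edge, so an alert goes out without any round trip to the cloud.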

Industrial sensors exist for just about everything, from laser scanners checking lumber for defects to vibration sensors in prime movers that can proactively indicate when a bearing is about to fail or a motor is running too hot. Even a temperature increase of 0.5 °C could indicate a change in the motor that, with real-time data monitoring, can result in longer up-time and extend the life of the valuable equipment being monitored.

With an increase in data comes the requirement to store and process that data quickly. Sending the data from thousands of sensors to the cloud, where it then has to be called back for review, is time-consuming and costly. Edge computing allows the core data to remain on or close to site, where it is needed, while processed data and reports are sent to the cloud for global access.
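The keep-raw-data-local, send-summaries-upstream pattern described above can be sketched in a few lines. The field names and the shape of the report are assumptions for illustration, not a specific product API:

```python
# Minimal sketch of edge aggregation: raw readings stay on the edge node,
# and only a compact JSON summary is shipped to the cloud.
# Field names and report shape are illustrative assumptions.
import json
import statistics

def summarize(sensor_id: str, raw_readings: list) -> str:
    """Reduce a batch of raw readings to a small JSON report for upload."""
    report = {
        "sensor": sensor_id,
        "count": len(raw_readings),
        "min": min(raw_readings),
        "max": max(raw_readings),
        "mean": round(statistics.fmean(raw_readings), 2),
    }
    return json.dumps(report)

# Thousands of raw samples remain on site; only a ~100-byte report goes upstream.
payload = summarize("psi-trim-saw", [71.2, 70.8, 69.9, 70.4])
```

The bandwidth saving scales with the batch size: a day of per-second readings from one sensor collapses into a handful of summary records, while the full-resolution history stays available on site for local analytics.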

What is next?

As edge computing creates demand for different types of facilities in different places, the design and capacity of local infrastructure will be guided by the workloads. As the volume of data grows, and that data moves across the network, the growth of edge computing creates a ripple effect that will generate business, extending hundreds of miles from the edge facilities.

PodTech’s in-house design, build, and operations team, along with its state-of-the-art Data Pod factory, is well positioned to meet this growing edge compute need. Contact us today to discuss how we can work together on a solution that fits your edge compute needs.
