Amara’s Law and the Anatomy of Business Use Cases in IoT

Last week we had an interesting debate on which use cases we need to work on, and across which industries. In the animated discussion, someone asked a quiz question: “What is Amara’s Law?”

It turned out that the American scientist Roy Amara came up with an interesting view and an easy-to-understand law: “While we overestimate the short-term effect of technology, we underestimate the long-term impact.” I feel that in the world of IoT this law is fascinatingly relevant.

The hype surrounding IoT puts pressure on managers to monetise the investments their organizations are making in IoT. The buzz centres on coming up with business use cases which could be winners, honing their effectiveness on an IoT platform, built or bought. The linkage between the identified use cases and their deployment on the chosen IoT platform is a significant one for shortening the time to market.

For the sake of completeness: a Business Use Case is essentially a business process flow, or sequence of steps of a business activity, which impacts the organization within and the ecosystem it works in. Most of the time the Business Use Case is the precursor to building an IT solution (an IoT solution in this post!).

Why are IoT-based solutions distinctive and diverse?

IoT solution design is quite different from typical IT solution design in that it bridges (through communication media, the internet) Physical Computing (also termed Operations Technology (OT): sensors, actuators and communication devices) and Digital Computing (also termed Information Technology (IT): data, analytics, workflows and applications). The diversity of use cases and operational requirements creates an array of IoT endpoints, communication protocols, data management and analytics technologies, as well as corresponding deployment topologies.

The real value which can be derived from a select Business Use Case in IoT comes from turning data into insight, and making it actionable to drive smarter operations.

Despite the diversity, there is a level of commonality across use cases that can illustrate the anatomy of IoT solutions. The use cases essentially fall into four broad areas: Monitoring, Control, Automation and Analytics leading to predictive management. Taking a layered approach in describing the anatomy helps identify relevant services and technologies from the things level all the way up to IoT apps.

How do we decide on the appropriate use case?

I have attempted to bring out a few key parameters to be assessed as we decide on this. These will help us go beyond the IoT hyperbole (technology pundits claim that the Internet of Things can change everything) and sift out frivolous gimmicks like Wi-Fi-enabled toothbrushes from the consideration set of use cases. So let us discuss the following:

  1. What makes industries IoT Use Case Friendly?
  2. What type of data availability would enrich a use case to be pursued?
  3. What is the expected business impact and Return on Investment?

What makes industries IoT Use Case Friendly?

Industries which are inherently sensor-driven – This is not hard to guess; these would be the first ones to be IoT use case friendly. Inputs from Intel and Cloudera document that Automotive, Energy (Utilities), Healthcare, Manufacturing, Retail, Buildings, Homes and Transportation are the leaders in making IoT work in core business processes. The key capability of Machine-to-Machine (M2M) communication already exists as something native to these industries, and since M2M communication relies on sensors within the device itself and on the networks that connect devices together, M2M is readily IoT-enabled.

So what are the major omissions? Media and Finance are the two major industries. I would rate Homes and Healthcare as “median” cases.

Industries which are less sensitive to data security and privacy – This is a major challenge that is not set to go away any time soon. Network security is a huge bone of contention when it comes to data security and data protection. For example, one of the advantages of IoT for healthcare is the ability to collect medical data in real time from devices such as wearables to monitor patients’ health, particularly those with ongoing conditions such as diabetes or hypertension. This medical data is extremely sensitive, so security needs to be at a level where external threats are not able to access and steal data records within the network. The same can be said of the Finance industry along similar lines. It is no surprise that the financial sector will be a late adopter of IoT-based initiatives (reports from Deloitte).

What type of data availability would enrich a use case to be pursued?

The promise of IoT is derived from the potential access to data from a variety of sources. These would be massive volumes of intermittent data streams generated from a variety of sources, and the data is predominantly time series. Data could arrive in real-time streams or in batches, diverse in data structures and schemas. Culling out the signals from the noise of data is what makes an IoT program an effective one.
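To make “culling out the signals from the noise” a little more concrete, here is a minimal sketch that smooths a noisy time-series stream with a simple moving average. The readings and the window size are purely hypothetical; a real pipeline would use richer filters and a streaming framework.

```python
from collections import deque

def moving_average(stream, window=5):
    """Smooth a noisy sensor stream with a simple moving average."""
    buf = deque(maxlen=window)
    for reading in stream:
        buf.append(reading)
        yield sum(buf) / len(buf)

# Hypothetical temperature readings (degrees centigrade); 35.0 is a noise spike
raw = [21.0, 21.4, 35.0, 21.2, 21.3, 21.1, 20.9]
print([round(v, 2) for v in moving_average(raw)])
```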

Hence it is imperative that the marriage of physical computing and digital computing be a near perfect one. To gain the benefits from the IoT program, usage data (from users), telemetry data (from sensors and remote endpoints), contextual data (from enterprise applications like ERP and CRM) and ambient data (weather, traffic etc.) need to work in concert to provide the insight to act upon. Hence the selection of the right business use case is imperative for the success of the IoT program, and if the use case can work on the standard layers of a typical IoT platform, that would be an ideal situation to be in.

IoT programs are expected to essentially help address business functions in the realm of Monitoring, Control, Automation and, last but not least, bringing predictability into actions through analytics. If we believe that the real value of IoT would be generated through predictive analytics, with its attendant benefits, then the use case should be decided based on the following five questions which we need to pose ourselves with respect to data (a small sketch after the list illustrates the kind of checks involved):

  1. What kind of data is needed? The question we need to ask should be something specific like: “I want to know whether the device will fail in the next X days if the temperature remains at Y degrees centigrade.”
  2. What are the measures we care about, and what data can provide them? If we want to predict things such as failure at the component level, then we have to have component-level information. If we want to predict a door failure within a vehicle, we need door-level sensors and the data aggregated from them. It is essential to measure the data that we care about.
  3. Is the data accurate? It is very common in predictive maintenance that we want to predict a failure occurring, but what we may actually be predicting through the data may not be a real failure. For example, it may be predicting a fault. If we have faults in the dataset, those might sometimes be failures, but sometimes not. So we have to think carefully about what we are modelling, and make sure that it is what we want to model.
  4. Is the data connected enough? If we have significant usage information, say maintenance logs, but we do not have the identifiers that can connect those datasets to the others, say from sensors, context (from enterprise apps) and ambience, then we are not doing justice to the analysis.
  5. Do we have enough data? In predictive maintenance in particular, if we are modelling device failure, we must have enough examples of those devices failing, and of the context and circumstances they are failing in.
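As a flavour of what such checks look like in practice, here is a minimal sketch for question 5 (touching question 3 as well). The event schema, labels and threshold are all hypothetical:

```python
def enough_failure_examples(events, min_examples=50):
    """Count events labelled as real failures, per component, and flag
    whether each count clears a minimum threshold (question 5)."""
    counts = {}
    for event in events:
        # each event is assumed to look like {"component": "door", "label": "failure"}
        if event["label"] == "failure":  # count real failures, not mere faults
            counts[event["component"]] = counts.get(event["component"], 0) + 1
    return {component: n >= min_examples for component, n in counts.items()}

events = [
    {"component": "door", "label": "failure"},
    {"component": "door", "label": "fault"},    # a fault is not a failure (question 3)
    {"component": "engine", "label": "ok"},
]
print(enough_failure_examples(events, min_examples=1))  # {'door': True}
```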

What is the expected business impact and Return on Investment?

It is a general expectation that once we embark on an IoT project, IoT-based insights will provide new views of functionality, revised capabilities and feature-based differentiation. These could create the right business impacts. Certain use cases could be more compelling, with a higher financial payback, than others. Use cases focussed on fuel, energy and labour savings have shorter payback periods and provide significant financial paybacks.
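A back-of-the-envelope comparison makes the point. The figures below are purely illustrative, and a real assessment would discount cash flows:

```python
def payback_period_months(upfront_cost, monthly_savings):
    """Simple, undiscounted payback period for a candidate use case."""
    return upfront_cost / monthly_savings

# Hypothetical figures: a fuel-savings use case vs. a customer-engagement one
print(payback_period_months(120_000, 10_000))  # 12.0 months
print(payback_period_months(120_000, 2_500))   # 48.0 months
```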

So, as use cases are decided, the benefits could be defined to fall under:

  1. Internal Benefits – which help internal organizational operations. These use cases would focus on safety and security, asset optimization, resource conservation and expense reduction.
  2. External Benefits – which contribute to the ecosystem in which the extended organisation works. These use cases would provide improvements in well-being, enhance customer service and engagement, and identify new revenue streams.

From our experience, as an IoT-based project implementation has to go through a life cycle, and as its success breeds bigger success, it pays to look at internal benefits in the early days and then branch out to the external benefits. The number of variables to control is smaller in the former and the chances of success are brighter.

Concluding Notes:

So, to get the necessary short-term effect to justify the investment in IoT while keeping an eye on the long-term impact on the organization, the right selection of the Business Use Case is a critical one. Amara’s Law has a notable influence on the anatomy of the Business Use Case in IoT!!


Who owns the Machine Generated Data in IoT – Men or Machine?

The other day we were discussing and debating on a solution to be designed to meet the sensing needs for access, temperature, and humidity for some devices which form part of a networking infrastructure ecosystem. The idea was to build an IoT based system for monitoring and control.

The design discussions veered around the ability to collect data from the sensors and the types of short-range communication protocols which could be deployed. Questions and clarifications were raised about whether we were compliant in using short-range communication protocols in sensitive areas such as customer data centres, which are customer-owned and where the customers may be custodians of data belonging to their own end customers.

The hidden perils of data acquisition and data ownership reared their heads and needed to be addressed as we moved forward.

The data which is acquired by sensors is essentially Machine Generated Data (MGD). This post will dwell on the subject of data ownership of MGD as follows:

  1. Sensors (Data Acquisition and Communication)
  2. Machine Generated Data
  3. The Lifecycle of the MGD and the Ownership Paradigm
  4. Who should be the owner of the MGD?

Sensors (Data Acquisition and Communication):

In the IoT ecosystem, the physical computing frontier is managed by the sensors. Sensors essentially perform three fundamental functions:

  • The act of sensing and acquiring the data
  • Communication of the data, through appropriate protocols, to internet cloud services for further aggregation and trend analysis
  • Powering the activity through a power supply

The additional functions would include processing/system management and user interface.

The Digital Computing part comprises the IoT application. This is determined by the types of sensors, cloud connectivity, power sources, and (optionally) user interface used in an IoT sensor device.

When making physical measurements such as temperature, strain, or pressure, we need a sensor to convert the physical properties into an electrical signal, usually voltage. Then, the signal must be converted to the proper amplitude and filtered for noise before being digitized, displayed, stored, or used to make a decision. Data-acquisition systems use ADCs (analog-to-digital converters) to digitize the signals with adequate signal conditioning.
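As an illustration of that last step, here is a minimal sketch converting a raw ADC count back into engineering units. The reference voltage, resolution and sensor scaling are assumptions for the example, not taken from any specific device:

```python
def adc_to_celsius(raw_count, vref=3.3, bits=12, offset_v=0.5, volts_per_degree=0.01):
    """Convert a raw ADC count to a temperature reading.

    Assumes a 12-bit ADC with a 3.3 V reference and a sensor that outputs
    0.5 V at 0 degrees C and 10 mV per degree (TMP36-style scaling).
    """
    voltage = raw_count / (2 ** bits - 1) * vref      # counts back to volts
    return (voltage - offset_v) / volts_per_degree    # volts to degrees C

print(round(adc_to_celsius(930), 1))  # ~24.9 degrees C
```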

Sensor data communication to the cloud can be done in multiple ways, from wireline to wireless communication of various complexities. While wireline communication has some important benefits (such as reliability, privacy, and power delivery over the same wires), wireless communication is the key catalyst for the majority of IoT applications that were not previously practical with wired systems. Reliability, channel security, long range, low power consumption, ease of use, and low cost are now reaching new levels previously thought infeasible.

Some examples of recently popular IoT wireless communication types are Wi-Fi, Bluetooth Low Energy (aka Bluetooth Smart), Zigbee (and other 802.15.4 mesh variants), cellular, LPWA (Low-Power Wide-Area network variants: Ingenu, LoRaWAN, Sigfox, NB-LTE, Weightless) and Iridium satellite.

Machine Generated Data (MGD):

Sensor data is an integral component of the increasing reality of the Internet of Things (IoT) environment. With IPv6, anything can be outfitted with a unique IP address and the capacity to transfer data over a network. Sensor data is essentially Machine Generated Data: MGD is data produced entirely by devices/machines through an event or observation.

By contrast, with human-generated data what is recorded is the direct result of human choices. Examples are buying on the web, making an inquiry, filling in a form, making payments with corresponding updates to a database. We will not consider the ownership of that data here and will limit this post to MGD.

The Lifecycle of the MGD and the Ownership Paradigm:

Different phases exist in the typical journey of Machine Generated Data:

Capture and Acquisition of Data – This is a machine or device based function, through signal reception.

Processing and Synthesis of the Data – This is a function which ensures enrichment and integration of the data.

Publication of the Data – This is done by expert systems and analysts who work on exception management, triggers and trends.

Usage of Data – The actions to be taken on the processed and reported information are decided by the end user.

Archival and Purging of Data – This function is essentially done by the data maintenance team, with supervision.
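Schematically, this journey can be read as a pipeline. The sketch below is only a skeleton of the five phases, with each phase reduced to a stub:

```python
def capture(signal):
    """Capture and acquisition: a machine/device function via signal reception."""
    return {"raw": signal}

def process(record):
    """Processing and synthesis: enrich and integrate the data."""
    return {**record, "enriched": True}

def publish(record):
    """Publication: expert systems flag exceptions, triggers and trends."""
    return {**record, "published": True}

def use(record):
    """Usage: the end user acts on the processed, reported information."""
    return f"action taken on reading {record['raw']}"

def archive(record):
    """Archival and purging: handled by the data maintenance team."""
    return None

record = publish(process(capture(42.7)))
print(use(record))
archive(record)
```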

Now let us dwell on the ownership paradigms. They range from the origination of data, to adding value to the data through makeover, to monetising the data through the insights generated. Interestingly, let us explore if there is any conclusive method for determining how ownership should be assigned. A number of players may be involved in the journey of the data (the user, hardware manufacturer, application developer, provider of database architecture and the purchaser of data, each having an equal claim at different stages of this journey).

Who should be the owner of the MGD?

Let me share the multiple and conflicting views:

The owner of the device which records the data: In essence, the owner of machine-generated data (MGD) is the entity who holds title to the device that records the data. In other words, the entity that owns the IoT device also owns the data produced by that device.

But there could be a lack of clarity if the device is leased rather than owned. When real-world constructs such as leaseholds (of, say, servers) come into play, it indeed gets complex and even murky.

The owner is the user of the data: The other dimension is that data may be owned by one party and controlled by another. Possession of data does not necessarily equate to title. Through possession there is control; title is ownership. Referred to as usage rights: each time data sets are copied, recopied and transmitted, control of the data follows. There could also be cases where the owner of the device is the user of the data.

The maker of the database, who essentially invests in aggregating, processing and making the data usable, is the owner of the data: This paradigm has a number of takers. The owner of a smart thermostat does not, for example, own the data about how he uses it. The only thing that is “ownable” is an aggregation or collection of such data, provided there has been a relevant investment in carrying out that aggregation or collection (the individual user is very unlikely to have made that investment). The owner here could be the home automation company. The value generated through this investment could include producing market intelligence and exploiting the insights from data to build market presence and differentiation.

The purchaser of data could be the owner of the data: An auto insurance company could buy vehicle-generated data (from the makers of automobiles) and design a product with targeted offerings to specific market segments based on, say, driving behaviour patterns and demographics. This may not be as easy as it seems; refer to http://joebarkai.com/who-owns-car-data/, which states that the owner of the vehicle, and not the maker of the car, owns the data collected from the electronic data recorder.

The value chain of who owns the data can be a complex one with multiple claimants. As one aggregates more sources, it just gets more complicated. A good example is in the making of smart cities. The sources of data can be from multiple layers and operational areas. City authorities would be making the effort to make use of the data in areas of waste management, traffic congestion, air pollution etc. So does the city authority own the data?

My personal take is: if someone in the MGD value chain is making the data usable for a larger good, and in the process may monetize the data to cover the investments, that entity deserves to be the owner of the data, as that is where value is generated.


Honour the Cornerstones to Cross the Chasm in IoT – From PoC to Production Roll-Out

While we are all submerged in data on IoT and the kind of disruptive impact IoT promises to bring in the coming years, the ground realities are sobering.

To bring home the point: reports from reputable sources like IoT World and IoT Analytics show that approximately 7,700 enterprise IoT projects have been initiated since early 2013, with a large number of projects still in the pilot/development phase of the lifecycle. Of these, more than 40% of the projects (around 3,000) were initiated in 2016. The key point here is that 75% of the projects have not gone beyond the Proof of Concept stage and have possibly been abandoned. This rather negates the hype on IoT.

There seem to be only a handful of projects which are deemed successful and have gone mainstream. Connecting remote industrial equipment (e.g., Rio Tinto’s self-driving trucks in the mining industry) or smart irrigation systems in a Smart City (e.g., Barcelona’s Smart City initiative) are examples of successes few and far between.

While we may argue that the history is still young, with data being collected only for the last 3 years, it is imperative to harvest the lessons learnt and utilize them as we embark on ambitious projects and keep the naysayers at bay.

If we have to “Cross the Chasm”, we need to understand the why, what and how.

  1. Why are IoT projects different from other projects in the IT world?
  2. What are the phases of a typical IoT project, and the hurdles to go past?
  3. How do we address these challenges early on?

1. Why are IoT projects different from other projects in the IT world?

a. Multiple technology layers call for multiple skill sets.

On a high level there are 5 major layers of an IoT solution including one cross-layer: Device, Communication, Cloud Services, Applications, and Security.

Developing end-to-end IoT solutions involves multiple layers that need to work in symphony across disparate technologies. It can be quite a struggle to craft viable business plans and to implement, integrate and manage a mix of different and complex IoT technologies, endpoints, platforms, back-end systems and data.

A typical IoT team’s skill sets will range across the following areas: embedded systems, cloud architecture, application enablement, data analysis, security design and back-end system integration. One can observe the diversity in competency required.

b. Security and privacy concerns limit business-use-case-driven projects.

In a recent report on IoT based projects, Gartner mentions, “The lack of a compelling business case is a major impediment to growth for enterprises. It remains almost as big an issue as security and privacy. We believe that this is not so much because of a lack of a business case rather that the business cases have yet to be discovered.” This discovery could be muted with the overwhelming issue of security and privacy. This type of concern may not be there in other IT projects given that the exposure to the outside world is a controlled and measured one.

2. What are the steps one goes through in an IoT project, and what are the hurdles that need to be acknowledged and addressed to take projects to production?

To get the best Return on Investment (ROI) from an IoT initiative, the following 5-step process has proven to be a suitable framework for IoT projects. The steps typically followed are:

  • Business Case Development
  • Build vs. Buy Decision
  • Proof of Concept
  • Initial Pilot Rollout
  • Production Deployment

a. Business Case Development (the selection of the right use case is critical)

The IoT world can be simplistically split into B2C and B2B. In the coming years we will see the B2C world dominated by the triumvirate of Apple, Google and Amazon, who have direct access to the consumer in multiple ways. This blog will limit itself to the enterprise (B2B). Typically, the business case for IoT is handled by a cross-functional team (business development, leadership and technology). It can be a fairly straightforward process if each one takes up the roles and responsibilities designated to them. But companies often suffer from insufficient collaboration across the disciplines involved, reversal of roles or a plain lack of focus when it comes to defining the return on investment.

b. Build vs. Buy Decisions (it would depend on the budgets allocated and the overall strategy)

As the area is new and evolving, approaches could be of two types. Organizations which are wedded to particular vendor partners may want to “try out” an IoT business use case on the vendor’s platform; for example, if an enterprise is on SAP, the business use case could be tried out with SAP. However, in enterprises where IoT sits under a Center of Excellence (CoE) or an Innovation Council with a distinct budget allocated, the approach may be to work on open-source development platforms to “play with” until a solution of sorts emerges.

c. Proof of Concept (moving from “you do not know what you do not know” to “now you know what you do not know”)

This is an important phase which could decide Go/No-Go for the project. The PoC phase is designed to validate a few key points, not every single detail. The best practice has been to start with just 1-5 scenarios or feature designs that matter the most to the customer’s business. In this phase we move from “you do not know what you do not know” to “now you know what you do not know”. Achieving a proof of concept in a specified period of time can be crucial to sustaining top-level management support.

d. Initial Pilot Rollout (dealing with multiple scenarios while discovering surprises)

Once the concept is proven, it is time to evolve the scenarios and be ready for surprises in the field. The IoT solution can now be integrated into the broader organization. A big challenge at this stage involves training a select group of enthusiastic employees to use the system, preferably at a client location.

e. Production Roll Out

At this point, as the IoT solution is deployed to thousands of devices, the manageability, scalability and interoperability, along with the security of the overall system, become key aspects of the overall success.

3. How do we address the challenges early on?

Crossing the chasm from PoC to production deployment is key to defining success. As pointed out, 75% of IoT projects are languishing at the PoC stage and have not seen a roll-out of sorts.

Now that we are 3 years into having IoT projects identified in the enterprise world, what are the lessons learnt, and how can we “Cross the Chasm”?

a. Choosing the Right Business Use Case:

Gartner’s survey of organizations that have already implemented IoT shows that the successes have largely been business use cases around internal operations (improved efficiency, cost savings and enhanced asset utilization) rather than the external IoT benefits of enhancing customer experience or increasing revenue; the former are low-hanging fruit and potentially easier to manage to success. The reasons are not hard to find, as such cases run in a controlled environment with fewer security risks and integration challenges. Two examples in the B2B world are connected asset management and connected logistics. BCG has come up with good research on this topic (“Winning in IoT”, January 2017).

b. Managing IoT Security effectively

In September 2016, the world witnessed its largest-ever IoT botnet attack through Mirai, and more recently WannaCry is another sorry example of the vulnerabilities we are exposed to, caused by the swathe of poorly protected IoT devices.

Knowing the possible threats (the STRIDE threat model) and addressing them would be the best way to make a robust beginning.

c. Minimizing Interoperability Issues

Given that the IoT architecture is a multilayered one, the issue of interoperability has to be tackled at the Device-to-Device layer, the Device-to-Server layer and the Server-to-Server layer.

Device to Device (Physical) layer – how bits are transmitted/received over the medium. What radio technologies are supported? For example, Bluetooth, WiFi, 802.15.4, cellular, variations of LPWAN or, alternatively, Ethernet.

Device to Server (Networking) layer – how the data packets are securely transported from device to cloud. What technologies are required to route data through your networks?

Server to Server (Application) layer – how the data is taken in and used in your applications. Which open, lightweight protocols are supported? For example, MQTT, AMQP, CoAP, RESTful HTTP, DDS or WebSockets optimized for bursts of small amounts of data.
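As a flavour of that application layer, here is a minimal sketch that publishes one telemetry reading over MQTT using the paho-mqtt library (1.x-style API). The broker host and topic are placeholders:

```python
import json
import paho.mqtt.client as mqtt  # pip install paho-mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883, keepalive=60)  # placeholder broker

# A small telemetry message, typical of the short bursts MQTT is designed for
payload = json.dumps({"device_id": "sensor-001", "temperature_c": 24.9})
client.publish("plant/line1/telemetry", payload, qos=1)   # placeholder topic
client.disconnect()
```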

Integrating manageability functionality will bring the benefit of scalability to the solution.

Protocol translation at the development stage would play an important role in building interoperability.

Conclusion:

These are early days for IoT projects; the enthusiasm is high and the technology layers are evolving. To accelerate the pace of the evolution, and to be cautiously optimistic in our approach, it is imperative that we look at these cornerstones, which have been gleaned from the learnings of the early adopters. Let us learn from these and prepare well to succeed emphatically.


The Integrity and Intrigue of IoT Platforms

At the last count, the number of IoT platforms jostling for space in the crowded IoT marketplace was 460. There are analyst reports galore which explain the features and benefits of these platforms.

For an IoT platform to be called comprehensive, mature and end-to-end, it has to have the 8 components, with reasonable feature and technology depth, which make up an IoT platform.

To know what the 4 layers are which constitute these 8 components, let us get down to the basics of a typical IoT platform.

Hardware (Device) Layer:

  1. This is where data is produced.
  2. This includes the physical devices with their in-built microprocessors, sensors, actuators and communication hardware.
  3. This layer makes the devices “smart”.
  4. The 2 components which go with this layer are:
    1. Connectivity and Normalisation
    2. Device Management

Communication Layer:

  1. This is where data gets transported.
  2. This part of the technology infrastructure ensures the hardware is connected to the network, via proprietary or open-source communication protocols.
  3. This layer gives the data from devices their “expression”.
  4. The 2 components which go with this layer are:
    1. Networks/Infrastructure
    2. Communication Services

Software Backend Layer (Cloud Services Layer):

  1. This is where the data is managed.
  2. It manages the connected devices and networks while providing the necessary interfaces to systems in the ecosystem (e.g., an ERP system).
  3. This layer provides the “knowledge”.
  4. The 2 components which go with this layer are:
    1. Database (RDBMS and NoSQL) for storage and orchestration
    2. Processing, Rules Engine and Events Management

Applications Layer:

  1. This is where data is turned into value.
  2. In the application layer, IoT use cases get presented to the user (B2C or B2B).
  3. This layer is responsible for the “actions” to be taken.
  4. The 2 components here are:
    1. Data Visualization and Analytics
    2. External Interfaces (APIs) to third-party systems (like ERP)

The cross-layer is the Security layer, which cuts across the 4 layers as mentioned.
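To make the backend layer’s “Processing, Rules Engine and Events Management” component a little more concrete, here is a toy sketch. The metric names, thresholds and actions are all invented for the example; real rules engines support far richer conditions, windows and event correlation:

```python
# Toy rules: fire an action when a reading exceeds its threshold
RULES = [
    ("temperature_c", 30.0, "raise_alert"),
    ("humidity_pct", 80.0, "notify_maintenance"),
]

def evaluate(event):
    """One rules-engine pass: turn a raw device event into zero or more actions."""
    actions = []
    for metric, threshold, action in RULES:
        if event.get(metric, float("-inf")) > threshold:
            actions.append((action, metric, event[metric]))
    return actions

event = {"device_id": "s-01", "temperature_c": 31.2, "humidity_pct": 55.0}
print(evaluate(event))  # [('raise_alert', 'temperature_c', 31.2)]
```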

The nub of the issue is that companies just providing cloud storage, or data security, or running CRM software, or connectivity management, claim to be IoT platforms.

75% of the IoT platforms currently in the market provide connectivity management pretty well, which is essentially the communication layer.

This is borne out by the figures below:

  1. IoT platforms themselves will not be revenue earners, but they can definitely facilitate revenue growth when put in a business context. The total revenue generated through IoT platforms in 2015 was $330 million and is expected to grow to $1.168 billion in 2017 (as per reports from IoT Analytics). Of the 460 platforms out there, only 7% generate $10 million or above.
  2. McKinsey reports that $11 trillion is the business value which can be generated through the implementation of IoT programs, through productivity gains, efficiency of operations and the like.
  3. The fact that 12.5 billion things will be connected by 2020 also makes the communication layer, with connectivity, the strongest contributor to the IoT platform companies.

So what are the parameters one should look at when accepting an IoT platform as a platform in the truest sense of the word (with integrity)?

The assessment of a platform could be considered to fall into two broad areas:

  1. Depth across the Layers for a Horizontal IoT Platform
  2. Vertical User-Specific Industry Focus (B2C or B2B)

The Depth Across the Layers for a Horizontal IoT Platform would cover:

  • Application Enablement
  • Device Management
  • Cloud Storage
  • Analytics Platforms
  • Connectivity

Currently 75% of the platforms are in the connectivity layer (as per the IoT Analytics report).

Vertical User-Specific Industry Focus (B2C or B2B focus)

  • Consumer – Home, Lifestyle, Health, Mobility
  • Business – Retail, Health, Energy, Smart Cities, Manufacturing, Supply Chain, Public Services

The leaders here are Energy and Manufacturing (falling under IIoT).

Next time one comes across an “IoT Platform”, one needs to be suitably intrigued to check the integrity of that claim!!


The Buzz of Platforms & the Bazaar of IoT Platforms

Among the words, phrases and acronyms in the Tech world, “Platform” seems to be the one grabbing the headlines. If one listens to any pitch from a start-up venture, it would not be uncommon to get the “platform pitch” in at least 1 out of 2 proposals. A lazy search on Google for the “Top 20 tech-weary words” fetched me the result that “platform” was 3rd on the list (https://www.businessinsider.com.au/the-worlds-top-20-tech-weary-words-for-2014-2014-5).

Words like “Being Platformed” have been verbalised as well, and there is a host of books on the significance of the platform in the technology world. I will not go into the virtues of platforms. I would rather dwell on how the leaders in the respective segments are few (a maximum of 3), while in the IoT world we seem to have, by some counts, 170 of them (McKinsey) to 400 of them (Beecham Research). This is definitely a bewildering array to go through and investigate.

What is a Platform, and why are there only a few platform leaders?

Stepping back: different people have different views and meanings for the word “platform”. To get a view of the diversity of platforms, we have:

Browsers (Chrome and Firefox), smartphone operating systems (iOS and Android), blogging (WordPress, Medium). Social media titans (YouTube, Facebook) and even Instagram are described as platforms. Uber, Airbnb and their ilk are widely described as “marketplaces”, “platforms” or “marketplace-platforms”. There are web services (Google Payments, Amazon Elastic Cloud) and gaming consoles (Xbox, Apple’s iPod Touch, Sony PlayStation). One interesting point to note is that in each category the market is mostly duopolistic.

To accommodate this diversity, the safest definition of a platform would be:

  1. An extensible codebase of a software-based system that provides core functionality, plus the modules that interoperate with it and the interfaces (aka Application Programming Interfaces (APIs)) through which they interoperate. In effect this system abstracts a number of common functions for its users, without exposing the complexity of building and managing them.
  2. The goal is to enable interactions between producers and consumers.
  3. This is enabled through three layers: the Network (to connect participants to the platform), Technology Infrastructure (to help create and exchange value) and Workflow and Data (thereby matching participants with content, goods and services).

This definition brings in the 2 dimensions of a platform: one for internal use and the other for external use.

    1. An internal dimension for building platforms: to ensure all necessary modules interoperate, and
    2. An external dimension for building platforms: to enable interaction with the outside world and make the platform as accessible and usable as possible.

Internal-dimension-led platforms focus on internal productivity and efficiencies, and focus on users; here the development is internally sourced and the platform is essentially built for internal use. External-dimension-led platforms focus on both the supply (developer) side and the demand (user) side; they are sometimes termed “two-sided” platforms. Development beyond a point is crowd-sourced: developers enrich the platform, and the platform reaches out to them through APIs.

In most cases, if the external dimension is well evolved, then the internalities come with the efficiencies by default: design quality, selection of interfaces leading to interoperability, robustness of infrastructure, and seamlessness in workflow and data streaming.

External-dimension platforms compete for both users and developers.

One important aspect to remember here is that a platform may not be ready to provide solutions to contextual and domain-specific problem statements. Applications built around the platform do that; these applications help get the Return on Investment (RoI) from the platforms.

In any segment you must have seen that the winners are few (at most 2 or 3; aspirants may be many, and they progressively wither away). The reasons have been presented above: design quality, interoperability, infrastructure robustness, seamlessness in workflow and data flow and, last but not least, an excellent and friendly user interface. Not many can master all 4 aspects. These help acquire a critical mass of customers which keeps growing, and a duopoly of sorts is created in the market space.

Successful platforms have the ability to support a variety of business use cases in the present, and strive to build the design to evolve over time and be, to an extent, future-ready.

The Bazaar of IoT Platforms – the reasons, and who would be the winners wading through the maze?

Now, coming to the Internet of Things (IoT): the IoT movement repeatedly talks about platforms, but those definitions do not align with any of Uber, Medium or Android. The first issue is interoperability. And none of these align with each other either.

Now let us address the question of the why of the “plethora of platforms” in IoT.

It can be seen clearly that a typical architecture of an IoT solution is multilayered. Simplistically put, the layers would be Device to Device (this involves hardware and firmware with short-range communication), Device to Server (which would again involve hardware and communication) and Server to Server (where cloud-based applications and long-range communication hold the key, along with network, data storage and data visualisation).

So we see protocols and standards driven by their origins: from communication technologies (we see telecom companies like AT&T and Verizon leading here), in the data storage area (we have Amazon and Google leading the way) and on the application side (Azure from Microsoft and ThingWorx from PTC being the prominent ones). Companies which have a library of business use cases, given the dominance they enjoy in their respective businesses (namely Bosch, GE, Honeywell), have the ambition to build their own community-based platforms. Then we have a host of start-ups who run a platform per business use case they address.

So the genesis of the “plethora of platforms” lies in the multilayered solution stack of IoT. This adds to complexity, and hence no one player can be a leader across the layers as of today.

In the coming years it could be reckoned that there will be a shakeout in the market, and the platforms could veer around the key broad-based use cases of remote monitoring and environment conditioning, predictive maintenance and process automation.

The ones which will win the battle of supremacy will have cracked the codes of:

  1. Security,
  2. Open interfaces,
  3. Carrier-grade reliability,
  4. Service levels,
  5. Scalability,
  6. Seamless integration into the back-office environment which is essential to the enterprise’s business operations, and
  7. An impressive usability and user interface.

Given the multi-tier architecture and the attendant complexity, it will be a while before a small group of winners starts to bubble to the top. Some of the also-ran aspirants may focus on domains and address a specific part of the ecosystem in which to play, or on industry segments like home or industrial, to justify their presence.

The High Potency of Low Power in the IoT Ecosystem

In one of my previous blogs, “If Data is the Hero, is Device the Super Hero?”, I had shared the need to give the device, and device health monitoring, due importance as we aggregate data and work on it to whip up insights.

The network characteristics of IoT, based on wireless technologies, are quite different from those of traditional wired or wireless networks, because the number of devices participating in communication is very large. In addition, traffic per IoT device is typically small, because each IoT device senses and transfers a small amount of data to a corresponding IoT server, although data generated from a huge number of objects may collectively have some impact on network performance. Furthermore, IoT networks should operate stably and sustainably for long periods without any need for human intervention.

Somewhere in the euphoria of IoT and the change agent it promises to be, did we forget that we have to provide power to these constrained devices (sensors)?

All our lives we have looked forward to power and performance, be it in our music system or our automobile; power is something we seek in products (or things).

Although the backbone architecture of networks hogs the limelight in a typical IoT program, the actual driver of a successful deployment of the network hinges on the devices at the edge. This “edge effect” is amplified given the numbers, and the challenge is exacerbated by the connectivity and computing being wireless in nature. Just imagine if these had to be kept charged; imagine the form factor of the devices, and the energy bills to boot.

So these “edge devices”, or “IoT end points”, need to be:

  1. extremely inexpensive,
  2. of minimal form factor,
  3. with low or no demands for power, and
  4. autonomous (untouched by human command).

The ideal situation would be to make these edge devices run without any sort of wired electricity, or to provide access to foraged energy (from heat, motion or light) that can be converted into electrical energy.

So the device should have the capability to work for perhaps a decade, powered by a small battery and/or by the energy foraged from the ambience it is operating in, as mentioned above.

Here the aggressive R&D on Low Power or No Power devices (LO-NO-PO) makes a grand entry to smartly address this conundrum.

Let me go back to the high-school chapter on electrical energy. Power consumption, as we have learnt, is proportional to the square of voltage (P = V²/R, where R, the resistance, is a constant). When the operating voltage decreases, the corresponding power consumed drops, and so does the performance. While this may adversely impact one’s music system, it is precisely what is required for an “edge device”.
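A quick numeric sketch shows how dramatic the effect is; the resistance and voltages below are purely illustrative:

```python
def power_watts(voltage, resistance):
    """P = V**2 / R: power dissipated across a fixed resistance."""
    return voltage ** 2 / resistance

R = 10_000  # ohms, held constant; purely illustrative
print(power_watts(3.3, R))  # ~1.09e-03 W at a typical 3.3 V supply
print(power_watts(0.5, R))  # ~2.5e-05 W at a sub-threshold-style 0.5 V supply
# Dropping the supply from 3.3 V to 0.5 V cuts power by (3.3 / 0.5) ** 2, about 44x
```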

Exploiting this quadratic relationship has led to the emergence of “sub-threshold” processing. This technique allows the microcontroller to run from a voltage supply lower than the transistor’s switching voltage. The solution being perfected is Sub-threshold Power Optimized Technology (SPOT), which has been pioneered by companies like Ambiq and Minima, aided by university-led academic research.

While these products are getting ready to come to the mainstream, they are addressing some of the challenges of making the devices less sensitive to factors like temperature, noise and other environmental conditions.

Will this extreme energy efficiency lead us to a battery-less future? Maybe we will be seeing that sooner rather than later!!

Top 5 reasons why “Internet” in Internet of Things could be a misnomer!!

Let me pose a quiz question –

“What is common between Paris’s Pont Neuf and the Internet of Things?”

———————-(Both are misnomers!!)————————-


Pont Neuf is the oldest bridge in the city, but its name still means “new bridge”. It was completed in 1607.

Now let me mention the phrase “Internet of Things”. Ever wondered how this name was coined?

The term “The Internet of Things” (IoT) was coined by Kevin Ashton in a presentation to Procter & Gamble in 1999. Ashton is a co-founder of MIT’s Auto-ID Lab. He pioneered RFID use in supply-chain management.


The world caught on to this, and I believe that with Cisco’s announcement of the “Internet of Everything” in the latter part of the first decade of the 21st century, the phrase was legitimised.

There were feeble attempts to offer synonyms like “ambient computing”, “ubiquitous computing”, “M2M computing” and the like, but nothing stuck like IoT.

One of the influential tech bloggers, Daniel Miessler, also tossed out a few alternatives like universal daemonization, universal object interaction, etc.

If one breaks the phrase into its components, it does not convey what it is expected to convey. I will explain this later in the document.

My premise is that the phrase does not do an honest job of explaining what exactly the Internet of Things, as we now know it, is.

Let us take a step back, take a view of the traditional internet and how it was built, and compare that with the world of IoT.

  1. End Systems – The end systems in the traditional internet would be PCs, laptops, hand-held devices, servers and routers (both manned and unmanned). In the IoT architecture, the scope and breadth of the end systems or devices to be connected is expected to run into billions, and these devices would be “small, dumb, cheap and copious”. These end devices do not have the processors, memory and hard drives needed to run a protocol stack.
  2. Flow of Data – The flow of data in the traditional internet is bidirectional, fast (given the bandwidth available) and high-fidelity. In the IoT world the data flow is generally insignificant individually, but meaningful in an aggregated manner, and is unidirectional from the device to the server or cloud. Here the communication would be machine to machine, in tiny snatches of data, working over possibly lossy networks. In the traditional internet, the data networks are essentially over-provisioned by design, built with more capacity than is typically required to provide a best-effort service. Protocols like TCP/IP are based on a mostly reliable connection between sender and receiver.
  3. Number of Devices and their Management – Numerous reports mention how humongous the breadth and scope of IoT would be. The end systems or devices would vastly outnumber human beings on the planet; the network so created would be varied and unprecedented. Imagine moisture sensors being linked to thermostats, and occupancy sensors linked to surveillance systems, and the like. With the count of devices exploding in the world of IoT, the panacea is thought to be IPv6, as the IP addresses required would be supplied by IPv6 (with its practically unlimited capacity to churn out addresses). But providing addresses is one thing and managing the devices is another: the estimated 700 billion IoT devices cannot be individually managed, they must self-manage. Self-addressing, self-classification and possibly self-healing will be the order of the day, in addition to the IP addresses.
  4. Human Involvement – The traditional internet is primarily human-to-machine oriented: there is a human at the end of the session. Applications like email, web surfing and video streaming consist of chunky data flowing through high-bandwidth pipes, consumed by humans per session, and the flow is bidirectional. In the world of IoT this is just the opposite: data is clipped (terse, yet purposeful), mostly meaningless when seen individually but making sense in an aggregated manner, and the data flow is unidirectional. The individual pieces of data could be insignificant and random, but when aggregated could be important and give a meaningful update. For example, a temperature sensor may generate only a few hundred bytes of data when the temperature crosses a threshold; otherwise it would be in sleep mode.
  5. Adaptation to Network – The traditional internet is extremely reliable. There is significant overprovisioning of bandwidth, and redundancy is built in at the design and deployment phases. This provides a high level of service to the internet users, the human beings.

In the world of IoT, most of the devices or end systems reside at the extreme edges of the network, and the connection may be inconsistent and intermittent. Devices may not need to be kept switched off to conserve power (as they consume low or no power), and they must share wireless connections among themselves. Individually lost messages may not mean much, and the devices can manage well on lossy networks.

Now you may ask: why not TCP/IP for IoT? These protocols, which form the heart of the traditional internet, are ill-suited for devices geared for IoT. The inherent robustness of these protocols makes them too heavy-duty and overhead-rich. It may sound odd, but sometimes being capable and reliable may not be what is needed in the “awkward world” of IoT.
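To illustrate the contrast, here is a minimal sketch of the lightweight pattern IoT devices favour: event-driven reporting over a connectionless transport (plain UDP here; CoAP, for instance, runs over UDP for similar reasons). The host, port and threshold are placeholders:

```python
import json
import socket

THRESHOLD_C = 30.0  # report only when the temperature crosses this value

def report_if_needed(temperature_c, server=("telemetry.example.com", 9999)):
    """Fire-and-forget UDP datagram instead of a heavyweight TCP session:
    no handshake, no retransmission, and lost packets are tolerated by design."""
    if temperature_c <= THRESHOLD_C:
        return  # stay silent; a real device would go back to sleep here
    payload = json.dumps({"id": "sensor-001", "t_c": temperature_c}).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, server)  # a few hundred bytes, nothing more
    sock.close()

report_if_needed(31.4)
```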

“Internet of Things” is an expression which has become firmly entrenched, when essentially it says that devices would be connected, and this connectedness makes them behave as computers; hence the things transform into “thinking things”.

Let us all be like Parisians and retain the much-acclaimed phrase “Internet of Things”, notwithstanding that it could be a misnomer!!

 ************************************************************************************

Should the Telecom companies be “shovel providers “or be “gold miners” in the gold rush of IoT?

Let us do a Google search on the “Top 10 IoT platforms”:

http://internetofthingswiki.com/top-10-iot-platforms/634/.

The Top 10 are:

Amazon Web Services, Microsoft Azure IoT, ThingWorx from PTC, Watson IoT from IBM, Jasper from Cisco, Salesforce.com, Carriots, Oracle, Predix from GE and KAA.

In my last blog, “If Data is the Hero, is Device the Super Hero?”, I had elucidated the importance of the poor cousin, the device, when it comes to the effectiveness of an IoT program; sometimes more important than the data.

As we talk about device and data: the device “creates” the data, and the data has to be “communicated”. The carrier of data is a Communication/Telecom Service Provider (TSP).

If creation and communication are the two key things in a successful implementation and deployment of an IoT project, why is it that none of the telecom companies are in the top bracket when it comes to building and promoting an IoT platform?

This is something which is intriguing to say the least, as all would agree.

In this list, we have start-ups, equipment manufacturers, software companies, storage companies, business process management companies, and large manufacturing companies. Telecom companies are sorely missed in this august list!!

This is not to say that Verizon with its “ThingSpace” and AT&T with M2X are not making headway, but more needs to be done to occupy center stage.

My premise here is –

“Are not telecom companies, with their inherent capabilities as service providers over decades, natively best suited to lead this, rather than be bit players?”

Let us do some quick fact-finding on why this is so, and on what it would take to change it. There are three aspects to consider:

A) Telecom companies can provide the following areas of service, namely:

  1. As a connectivity provider, providing communication as a service
  2. Help manage devices and their integration
  3. Distribute content as a service
  4. Provide maintenance and support through monitoring and control
  5. Help deploy development environments


The following diagram is courtesy of https://www.ericsson.com/en/internet-of-things/telecom-service-provider.

Drawing an analogy with the Gold Rush: if the scale and speed of growth in IoT can be compared to a gold rush, then while the gold miners are trying to find gold, someone is trying to provide shovels. Today the telecom companies seem to be doing the latter.

They have to shake up their approach from being a connectivity or infrastructure provider to someone providing information value through service enablement and service creation.

B) IoT infrastructure demands are different – the demand for services is from machines/devices and not humans.

While we could use the terms connectivity and communication with a broad brush, we need to get an understanding of the way communication is handled by telecom companies today and how different the communication needs of the IoT ecosystem are.

While the IoT ecosystem requires low bandwidth over a widely dispersed area, for a massive number of devices in the field which could be multiple factors of current capacity, the current service needs which telecom companies address are for high bandwidth in high-density regions, with the need for a higher power supply.

So, with an increase in the volume of connected devices, service providers need to factor in that IoT devices may communicate very differently compared to smartphones and computers, which are mainly manned by humans.

Some IoT devices tend to exchange relatively small amounts of data and connect and disconnect to the network very infrequently. Examples of this are smart meters (e.g. gas or electricity) providing their latest values to a centralized repository. In contrast, a connected car may exchange diagnostics information to this central hub while also offering mobile broadband services for in-car entertainment, thereby exchanging a lot of data over the mobile connection for a longer period of time.

This difference in ‘IoT endpoint’ behavior places very different demands on both the network as well as the data center responsible for processing and hosting this information. For example, a 4G network is very suitable for the connected car use case, but may not be the best choice for the smart metering scenario.

Smart metering only requires a low bandwidth channel that can be accessed with minimal power consumption.

C) Need for Short Range and Long Range Communication in IoT is best addressed by Telecom Service Providers.

Telecom service providers (TSPs) are currently rolling out low-power WAN networks (LPWAN) such as LoRa or Sigfox, which will work alongside traditional 3G/4G networks and which cater to those IoT applications that require very low bandwidth and low power consumption, so the battery lifespan of the IoT device can last several years.

On the data center side, adopting cloud technologies is critical. The ability to quickly spin up a virtual environment delivering both the network functionalities as well as the IoT platform functionalities addressing the specifics to each IoT use case is crucial. Indeed, due to the wide variety of IoT use cases, there is no one-size-fits-all approach.

As the promised land of the Internet of Things approaches, TSPs are best positioned to become the facilitators and engine room of this super-connected world.

Connecting IoT devices is one thing; securing them, and securing the applications they connect to, is another. TSPs have become much more security-aware in recent years as cyber and DDoS attacks have impacted other areas of their business.

Given that TSPs,

  1. Can handle short-range and long-range communication through cloud-based infrastructure, and
  2. Have a better knowledge of handling security and privacy, given the years of experience behind them,

it would not be a surprise if TSPs take the lead here in building comprehensive IoT platforms and aspire to be market leaders rather than fast followers and bit players.


If Data is the Hero, is Device the Super Hero?


To monetize any engagement or project, data is a critical element. Data needs to be cleaned, aggregated, ingested, modeled and analyzed to provide business insights which are actionable. A project addressing a defined business use case is meaningful if it provides us insights. IoT projects are multi-dimensional; they cover layers like Device-Device, Device-Server and Server-Server when it comes to data and its flow.

A typical IoT project gets represented as a complex system of networks, platforms, interfaces, protocols, devices and data. IoT devices range from sensors, actuators and gateways to embedded hardware/software within products and assets.

The number and type of IoT devices, as well as the associated use cases for apps and services, will grow exponentially within leading industry verticals. One of the critical success factors for IoT operation, which could well be termed the Operational Support System (OSS) for IoT, is IoT Device Management.

If devices are not managed well, the output data will be suspect and the project could well be a failed one. The need for IoT device management is paramount; it can be a matter of concern, or an opportunity to differentiate if addressed with care.

Fundamentally, IoT Device Management would cover areas like:

  1. Provisioning and Authentication
  2. Configuration and Control
  3. Software Updates and Maintenance
  4. Monitoring and Diagnostics

I would not slot Security here, as it is a separate subject in its own right.

Of the 4 areas listed, 1, 2 and 3 can be done with a degree of control as we initiate the project. The last area, Monitoring and Diagnostics, is the most critical, as the level of control which needs to be brought in is the most complex. This is given the diversity and scale in terms of the number of devices, the corresponding protocols and the associated challenges of interoperability. Then there is the need to replicate problems and take corrective or predictive actions.
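As a hint of what the simplest monitoring check looks like, here is a sketch of a heartbeat staleness test; the device names and the silence threshold are invented for the example:

```python
import time

def stale_devices(last_seen, max_silence_s=300, now=None):
    """Flag devices whose last heartbeat is older than max_silence_s seconds:
    the most basic monitoring/diagnostics check in device management."""
    now = now if now is not None else time.time()
    return [device for device, ts in last_seen.items() if now - ts > max_silence_s]

heartbeats = {
    "truck-07-gateway": time.time() - 30,    # reported half a minute ago
    "reefer-12-sensor": time.time() - 3600,  # silent for an hour: suspect
}
print(stale_devices(heartbeats))  # ['reefer-12-sensor']
```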

Data may have the flamboyance of a hero in the IoT story, but the real workhorses are the devices which work at the edge of the IoT system: the “Things” in the Internet of Things. The devices spew out the data, and hence a healthy device provides honest data.

Devices out in the field either generate and transmit data to a centralised platform (a one-way movement) or perform automated tasks that generate data. A mundane job, perhaps, yet the overall performance of a system often hinges on the health of field devices.

Imagine running a critical operation: a fleet of trucks managing a cold chain, shipping perishable goods. If a device, sensor, embedded agent or gateway begins to falter, and, more importantly, has not been monitored well enough and corrective actions have not been taken, the consequences could be dire and the contractual impact disastrous.

The challenge of maintaining devices may sound basic compared with aggregating and analysing data, but it’s essential to a successful IoT strategy.
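A minimal sketch of that monitoring, assuming each device sends a periodic heartbeat, could treat anything silent beyond a timeout as suspect; the five-minute timeout and device names here are assumptions.

```python
import time

HEARTBEAT_TIMEOUT_S = 300  # assumption: devices report at least every 5 minutes

def stale_devices(last_seen: dict, now: float) -> list:
    """Return devices whose last heartbeat is older than the timeout."""
    return [d for d, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT_S]

# Hypothetical last-seen heartbeat timestamps (epoch seconds) per device.
now = time.time()
last_seen = {
    "reefer-sensor-01": now - 60,    # reported a minute ago: healthy
    "reefer-sensor-02": now - 1800,  # silent for 30 minutes: suspect
}

for device in stale_devices(last_seen, now):
    # Quarantine this device's readings and raise an alert, rather than
    # silently feeding suspect data into downstream analytics.
    print(f"WARNING: {device} has gone quiet, treat its data as suspect")
```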

  • So what is the factor that makes devices vulnerable, and hence their output data potentially unreliable?

The key factor here is RF technology.

Most IoT devices rely on radio frequency (RF) technology such as Bluetooth, ZigBee and Wi-Fi for communications. Otherwise known as far-field transmission, RF is great when communicating over long distances, but becomes problematic when applied to short-range, isolated IoT ecosystems such as wireless personal area networks.

Research reports indicate that link and network security become increasingly difficult as the number of RF devices increases. The relentless requirement for decreasing power consumption in devices leaves less room for handshake and encryption protocols. These issues are clearly reflected in Bluetooth's increasingly poor reliability and security record.

We also see RF-based devices shutting each other down due to interference, a situation that will only grow worse as the IoT industry grows by the billions.

  • So how would these vulnerabilities be addressed?

An alternative to RF that may go mainstream is Near-Field Magnetic Induction (NFMI). NFMI uses modulated magnetic fields to transfer data wirelessly between two points. Its main strength is its attenuation: the signal decays roughly a thousand times faster than RF signals, which eliminates much of the interference and the security issues attributed to technologies such as Bluetooth.
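To put a rough number on that attenuation claim, here is a back-of-the-envelope, free-space illustration: magnetic near-field amplitude falls off with distance roughly as 1/r^3, against 1/r for a radiated RF field, so every doubling of distance costs about 18 dB instead of 6 dB.

```python
import math

def falloff_db(distance_ratio: float, exponent: int) -> float:
    """Field-strength drop in dB when distance grows by distance_ratio,
    for a field decaying as 1 / r**exponent."""
    return 20 * exponent * math.log10(distance_ratio)

# Radiated RF (far field): amplitude ~ 1/r.
print(falloff_db(2, 1))  # ~6 dB per doubling of distance
# Magnetic induction (near field): amplitude ~ 1/r**3.
print(falloff_db(2, 3))  # ~18 dB per doubling: a much smaller eavesdropping bubble
```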

NFMI will prove its worth in a new way in the age of IoT as it marches to mainstream adoption.

  • How will the Operational Support System metamorphose?

As devices explode in number, humans cannot control a billion nodes connected across a wide area, however centralised the remote management may be.

Here, the deployment of machine learning to build a dynamic, automated network-management framework will be key. Industry players are coming up with proprietary algorithms that provide real-time distributed system control, with self-management and self-healing capabilities, for huge long-range IoT networks consisting of billions of smart devices sprawling across millions of square miles. Such systems use trained neural networks and Bayesian methods to optimise the interaction of nodes and IoT gateways on the network.
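Those algorithms are proprietary, but the flavour of self-management can be sketched with a toy statistical check standing in for a trained model: score each node's telemetry against the fleet and quarantine outliers with no human in the loop. The node names and latency figures below are invented.

```python
from statistics import mean, stdev

def anomalous_nodes(latencies_ms, z_threshold=1.0):
    """Flag nodes whose average latency deviates strongly from the fleet.

    A toy stand-in for the learned models mentioned above; the control
    loop is the point: score, flag and act without human intervention."""
    fleet_avgs = {node: mean(vals) for node, vals in latencies_ms.items()}
    mu = mean(fleet_avgs.values())
    sigma = stdev(fleet_avgs.values())
    return [n for n, avg in fleet_avgs.items()
            if sigma and abs(avg - mu) > z_threshold * sigma]

readings = {
    "node-a": [12.0, 11.5, 12.2],
    "node-b": [11.8, 12.1, 12.0],
    "node-c": [95.0, 102.0, 99.5],  # a misbehaving node
}
for node in anomalous_nodes(readings):
    print(f"Quarantining {node} and rerouting its traffic")
```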

Hence, in conclusion: the implications of the scale of an increasingly connected world could be scary. Whoever masters the management of this explosion of devices will be the winner; those who cannot could well be buried under the weight of their devices. A service line of AI-infused Operational Support Systems will develop.


Phones are a wonder of Interoperability; can "Things" emulate them?

Posted on June 22, 2017 by somjitamrit

Ever wondered why, be it a wired (landline) phone or a wireless (cellular) phone, once a call is made, it works effortlessly? Then why are we making a big deal of the seamless connectivity of the "Things" in IoT when it comes to Interoperability?

Before we get to Interoperability, let us deliberate for a moment on Standards. Many a time these two words are lazily used interchangeably. But there is a difference, and it is a massive one.

Let us deliberate on why equating a standard with interoperability is a fallacy of sorts. Reaching for an analogy, a standard is like a language: we could define English, German or Mandarin as standards. The standards bodies would then claim that everyone who speaks the same language is interoperable. But a language only defines the grammar and the vocabulary; it does nothing by itself to promote interoperability.

Interoperability is about working together seamlessly.  To achieve that requires more than just a standard.  It needs a set of interoperability tests and the testing tools to confirm compliance with those tests.  These don’t generally come with a standard – they need to be put in place to support it.  That entails time and money, which means most standards can’t support them until they’re already fairly well established.

Now let us come back to our example of the phone. This is a simpler interoperability problem than it might first appear, because phones do not connect directly to each other. Instead, each connects to the network infrastructure, which transfers the data between the two (or more) appropriate handsets. Handsets and base stations have to conform to industry standards. For most phones those are defined by ETSI, the European Telecommunications Standards Institute, which is responsible for the widely used GSM and 3G/4G standards. It is not the only standards body, but its standards account for over 8 billion connections throughout the world.

Despite those numbers, there are not that many different mobile phones, and even fewer different base stations for them to connect to, because the industry is controlled by a relatively small number of companies, thanks to consolidation. One reason for the small number of companies is the cost of implementing the standards, plus the cost of testing them. Before a mobile phone is brought to market it needs to pass a stringent set of qualification tests, which can cost up to $1 million. At that point it becomes legal to sell it. However, before a network operator will sell it to you, they insist on it passing a further set of interoperability tests.

Now let us bring the focus to the “Things” and why Interoperability is a must for IoT to go mainstream.

As the number of things connecting the physical world to the digital world on the Internet starts to grow, the task of testing everything against everything else becomes impossible, since the number of combinations grows combinatorially. Major manufacturers will still perform extensive testing of their flagship products, but in general interoperability starts to take a nosedive. The other thing that happens is that as more and more manufacturers write protocol stacks and profiles, each tends to deviate slightly from the standard because of minor differences in interpretation and implementation. Rather than testing these rigorously against the specification, effort tends to go into ensuring interoperability with what each manufacturer sees as the market-leading product. That results in more patches to make their stacks work. If they in turn become successful, other manufacturers will do the same against that product, running the risk that de facto implementations diverge further from the specification.
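The arithmetic behind that impossibility is simple: even restricted to pairwise tests, the count grows as n(n-1)/2, so the test matrix outruns any lab long before device counts reach the billions.

```python
def pairwise_tests(n_products: int) -> int:
    """Number of distinct product pairs to test against each other."""
    return n_products * (n_products - 1) // 2

for n in (10, 100, 1000):
    print(f"{n} products -> {pairwise_tests(n):,} pairwise interoperability tests")
# 10 -> 45, 100 -> 4,950, 1000 -> 499,500
```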

This has resulted in half a dozen standards bodies and consortia, each of which would like to set the standard. Their success lies in roping in more manufacturers of the physical devices that get connected: the more members they have, the more they can make theirs the leading standard to which others have to "interoperate".

Therefore we find the Industrial Internet Consortium (IIC) led by GE (a manufacturing behemoth), but its competitor Honeywell is not in the consortium.

The Internet of Things Consortium (IOTC) is led by Verizon (a global telecom giant), but its peer AT&T is absent.

The Open Interconnect Consortium (OIC) is led by Cisco and Intel (leading network and chip manufacturers), but Juniper and AMD do not make the list.

AllSeen, which started with Qualcomm and Microsoft as principals, has a healthy partnership of appliance and white-goods companies like LG and Sharp, but not Samsung or Electrolux.

Interestingly, we will see consolidation here sooner rather than later, and a good sign is the merger of sorts between the OIC and the IIC.

Will this see the blurring of the definitions of Interoperability and Standards?! Let us see, but it will not be far into the future!!