What is IoE?

The Internet of Everything (IoE) is bringing together people, process, data, and things to make networked connections more relevant and valuable than ever before, turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses and individuals.

The Internet of Everything (IoE) is a broad term that refers to devices and consumer products connected to the internet and outfitted with expanded digital features. It is a philosophy in which technology's future comprises many different types of appliances, devices and items connected to the global internet.

The Internet of Everything (IoE) is a concept that extends the Internet of Things (IoT) emphasis on machine-to-machine (M2M) communications to describe a more complex system that also encompasses people and processes.

Today, IoE is considered a superset of IoT. … Oftentimes, when people refer to IoT, they are actually discussing the IoE. A simple way to tell is that the IoT is simply what it says — things — whereas the Internet of Everything builds on top of the IoT, combining people, process, data, and things.

The difference between the Internet of Everything (IoE) and the Internet of Things (IoT) lies in the intelligent connection. The IoT is mostly about physical objects and concepts communicating with each other, but the IoE brings in network intelligence to bind all these concepts into a cohesive system.

"The Internet of Everything is the intelligent connection of people, process, data and things." The definition may not mean much to most people at first, but the fine line separating the IoE from the IoT is exactly that intelligent connection.

 

AIOps (not to be confused with IoE) is the new trend in IT operations. It stands for Artificial Intelligence for IT Operations (previously referred to as "Algorithmic IT Operations Analytics"). The term refers to IT operations platforms that use artificial intelligence; many of these platforms are cloud services.

AIOps is the use of advanced algorithms and artificial intelligence techniques to analyze big data from various IT and business operations tools, in order to speed service delivery, increase IT efficiency and deliver a superior user experience.

AIOps enables a move away from siloed operations management and provides intelligent insights that drive automation and collaboration for continuous improvement.

 

What is Artificial Intelligence?

What does Artificial Intelligence (AI) mean? Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include speech recognition, learning, planning and problem solving.

 

Java, Python, Lisp, Prolog, and C++ are major AI programming languages, each capable of satisfying different needs in the development and design of AI software.

In computer science, artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans.

On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

 

Strategic Technology Trends for 2019

Autonomous Things

Autonomous things, such as robots, drones and autonomous vehicles, use AI to automate functions previously performed by humans. Their automation goes beyond the automation provided by rigid programming models, and they exploit AI to deliver advanced behaviors that interact more naturally with their surroundings and with people.

As autonomous things proliferate, we expect a shift from stand-alone intelligent things to a swarm of collaborative intelligent things, with multiple devices working together, either independently of people or with human input. For example, if a drone examined a large field and found that it was ready for harvesting, it could dispatch an “autonomous harvester.” Or in the delivery market, the most effective solution may be to use an autonomous vehicle to move packages to the target area. Robots and drones on board the vehicle could then ensure final delivery of the package.

Augmented Analytics

Augmented analytics focuses on a specific area of augmented intelligence, using machine learning (ML) to transform how analytics content is developed, consumed and shared. Augmented analytics capabilities will advance rapidly to mainstream adoption, as a key feature of data preparation, data management, modern analytics, business process management, process mining and data science platforms. Automated insights from augmented analytics will also be embedded in enterprise applications — for example, those of the HR, finance, sales, marketing, customer service, procurement and asset management departments — to optimize the decisions and actions of all employees within their context, not just those of analysts and data scientists. Augmented analytics automates the process of data preparation, insight generation and insight visualization, eliminating the need for professional data scientists in many situations.

This will lead to citizen data science, an emerging set of capabilities and practices that enables users whose main job is outside the field of statistics and analytics to extract predictive and prescriptive insights from data. Through 2020, the number of citizen data scientists will grow five times faster than the number of expert data scientists. Organizations can use citizen data scientists to fill the data science and machine learning talent gap caused by the shortage and high cost of data scientists.

 

AI-Driven Development

The market is rapidly shifting from an approach in which professional data scientists must partner with application developers to create most AI-enhanced solutions to a model in which the professional developer can operate alone using predefined models delivered as a service. This provides the developer with an ecosystem of AI algorithms and models, as well as development tools tailored to integrating AI capabilities and models into a solution. Another level of opportunity for professional application development arises as AI is applied to the development process itself to automate various data science, application development and testing functions. By 2022, at least 40 percent of new application development projects will have AI co-developers on their team.

Ultimately, highly advanced AI-powered development environments automating both functional and nonfunctional aspects of applications will give rise to a new age of the 'citizen application developer', where nonprofessionals will be able to use AI-driven tools to automatically generate new solutions. Tools that enable nonprofessionals to generate applications without coding are not new, but we expect that AI-powered systems will drive a new level of flexibility.

 

Digital Twins

A digital twin refers to the digital representation of a real-world entity or system. By 2020, it is estimated there will be more than 20 billion connected sensors and endpoints, and digital twins will exist for potentially billions of things. Organizations will implement digital twins simply at first. They will evolve them over time, improving their ability to collect and visualize the right data, apply the right analytics and rules, and respond effectively to business objectives.

One aspect of the digital twin evolution that moves beyond IoT will be enterprises implementing digital twins of their organizations (DTOs). A DTO is a dynamic software model that relies on operational or other data to understand how an organization operationalizes its business model, connects with its current state, deploys resources and responds to changes to deliver expected customer value. DTOs help drive efficiencies in business processes, as well as create more flexible, dynamic and responsive processes that can potentially react to changing conditions automatically.

Empowered Edge

Edge refers to endpoint devices used by people or embedded in the world around us. Edge computing describes a computing topology in which information processing, and content collection and delivery, are placed closer to these endpoints. It tries to keep the traffic and processing local, with the goal being to reduce traffic and latency.

In the near term, edge is being driven by IoE and the need to keep processing close to the endpoint rather than on a centralized cloud server. However, rather than creating a new architecture, cloud computing and edge computing will evolve as complementary models, with cloud services being managed as a centralized service executing not only on centralized servers, but also on distributed servers on-premises and on the edge devices themselves.

Over the next five years, specialized AI chips, along with greater processing power, storage and other advanced capabilities, will be added to a wider array of edge devices. The extreme heterogeneity of this embedded IoE world and the long life cycles of assets such as industrial systems will create significant management challenges. Longer term, as 5G matures, the expanding edge computing environment will have more robust communication back to centralized services. 5G provides lower latency, higher bandwidth, and (very importantly for edge) a dramatic increase in the number of nodes (edge endpoints) per square km.

 

Immersive Experience

Conversational platforms are changing the way in which people interact with the digital world. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are changing the way in which people perceive the digital world. This combined shift in perception and interaction models leads to the future immersive user experience.

 

Over time, we will shift from thinking about individual devices and fragmented user interface (UI) technologies to a multichannel and multimodal experience. The multimodal experience will connect people with the digital world across hundreds of edge devices that surround them, including traditional computing devices, wearables, automobiles, environmental sensors and consumer appliances. The multichannel experience will use all human senses as well as advanced computer senses (such as heat, humidity and radar) across these multimodal devices. This multiexperience environment will create an ambient experience in which the spaces that surround us define “the computer” rather than the individual devices. In effect, the environment is the computer.

Blockchain

Blockchain, a type of distributed ledger, promises to reshape industries by enabling trust, providing transparency and reducing friction across business ecosystems, potentially lowering costs, reducing transaction settlement times and improving cash flow. Today, trust is placed in banks, clearinghouses, governments and many other institutions as central authorities, with the "single version of the truth" maintained securely in their databases. The centralized trust model adds delays and friction costs (commissions, fees and the time value of money) to transactions. Blockchain provides an alternative trust model and removes the need for central authorities to arbitrate transactions.
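
To make the ledger idea concrete, here is a minimal, illustrative Python sketch of a hash-chained ledger. The class and field names are our own assumptions, and real blockchains add distribution and consensus on top of this; the one property the sketch demonstrates is tamper evidence, since editing any earlier block breaks every later link.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """A toy hash-chained ledger (single node: no distribution, no consensus)."""
    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(),
                   "transactions": [], "prev_hash": "0" * 64}
        self.chain = [genesis]

    def add_block(self, transactions: list) -> None:
        # Each new block commits to the previous one via its hash.
        self.chain.append({"index": len(self.chain),
                           "timestamp": time.time(),
                           "transactions": transactions,
                           "prev_hash": block_hash(self.chain[-1])})

    def is_valid(self) -> bool:
        # Any edit to an earlier block breaks every later prev_hash link.
        return all(self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.add_block([{"from": "alice", "to": "bob", "amount": 10}])
ledger.add_block([{"from": "bob", "to": "carol", "amount": 4}])
assert ledger.is_valid()
ledger.chain[1]["transactions"][0]["amount"] = 999   # tamper with history
assert not ledger.is_valid()                         # the chain exposes it
```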

 

Current blockchain technologies and concepts are immature, poorly understood and unproven in mission-critical, at-scale business operations. This is particularly so with the complex elements that support more sophisticated scenarios.

 

Smart Spaces

A smart space is a physical or digital environment in which humans and technology-enabled systems interact in increasingly open, connected, coordinated and intelligent ecosystems. Multiple elements — including people, processes, services and things — come together in a smart space to create a more immersive, interactive and automated experience for a target set of people and industry scenarios.

This trend has been coalescing for some time around elements such as smart cities, digital workplaces, smart homes and connected factories. We believe the market is entering a period of accelerated delivery of robust smart spaces, with technology becoming an integral part of our daily lives, whether as employees, customers, consumers, community members or citizens.

 

Digital Ethics and Privacy

Digital ethics and privacy is a growing concern for individuals, organizations and governments. People are increasingly concerned about how their personal information is being used by organizations in both the public and private sector, and the backlash will only increase for organizations that are not proactively addressing these concerns.

Any discussion of privacy must be grounded in the broader topic of digital ethics and the trust of your customers, constituents and employees. While privacy and security are foundational components in building trust, trust is about more than just these components: trust is the acceptance of the truth of a statement without evidence or investigation. Ultimately, an organization's position on privacy must be driven by its broader position on ethics and trust. Shifting from privacy to ethics moves the conversation beyond 'Are we compliant?' toward 'Are we doing the right thing?'

 

Quantum Computing

Quantum computing (QC) is a type of nonclassical computing that operates on the quantum state of subatomic particles (for example, electrons and ions) that represent information as elements denoted as quantum bits (qubits). The parallel execution and exponential scalability of quantum computers mean they excel at problems too complex for a traditional approach, or where traditional algorithms would take too long to find a solution. Industries such as automotive, financial services, insurance, pharmaceuticals, the military and research organizations have the most to gain from advancements in QC. In the pharmaceutical industry, for example, QC could be used to model molecular interactions at the atomic level to accelerate time to market for new cancer-treating drugs, or to accelerate and more accurately predict the interaction of proteins, leading to new pharmaceutical methodologies.
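
For readers who want a feel for the qubit arithmetic behind these claims, here is a toy state-vector simulation in Python with NumPy (illustrative only; real quantum hardware works nothing like a classical simulation). It places a single qubit into an equal superposition and shows why each added qubit doubles the size of the state, which is the exponential scaling mentioned above.

```python
import numpy as np

# A qubit is a length-2 complex state vector; n qubits need 2**n amplitudes.
zero = np.array([1, 0], dtype=complex)          # the |0> basis state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

state = H @ zero                                 # equal superposition of 0 and 1
probs = np.abs(state) ** 2                       # Born rule: measurement odds
print(probs)                                     # [0.5 0.5]

# Two independent qubits: the joint state is a Kronecker (tensor) product,
# so the vector doubles in length with every qubit added.
joint = np.kron(state, state)
print(joint.shape)                               # (4,)
```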

CIOs and IT leaders should start planning for QC by increasing their understanding of how it can apply to real-world business problems. Learn while the technology is still in an emerging state. Identify real-world problems where QC has potential, and consider its possible impact on security. But don't believe the hype that it will revolutionize things in the next few years. Most organizations should learn about and monitor QC through 2022, and perhaps exploit it from 2023 or 2025.

 

Key Concepts

API Monitoring

An API is a tool that developers use: they provide certain data and consume the services the API exposes. It lists a set of operations developers can make use of, and it describes what each one does.

If you are a developer, you don't need to know how an API does its work; you just use it. Well-known platforms nowadays offer many APIs providing standard functions, so that you don't have to code them yourself.
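
As a concrete illustration, the snippet below calls one of GitHub's public REST endpoints using only Python's standard library. The caller consumes documented fields of the JSON response without knowing anything about how GitHub implements the service, which is exactly the point made above.

```python
import json
import urllib.request

# GitHub's public REST API; we rely only on its documented contract.
URL = "https://api.github.com/users/octocat"

# GitHub requires a User-Agent header on API requests.
request = urllib.request.Request(URL, headers={"User-Agent": "api-demo"})
with urllib.request.urlopen(request) as response:
    payload = json.loads(response.read().decode("utf-8"))

# Consume the documented fields of the response; the implementation
# behind the endpoint stays completely hidden from the caller.
print(payload["login"], "has", payload["public_repos"], "public repos")
```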

 


Business Analysis

IoE Business Analysis

 

PENDING WRITING THE INTRODUCTION

We are now so used to seeing dashboards that we forget Business Analysis, Business Analytics and Business Intelligence are simply different facets of the same underlying 'movement'. To visualize. To explore. To discover.

And as the Internet of Everything (IoE) brings more intelligence to more devices that pervade, invisibly, every aspect of our lives, Business Analysts will need to factor their existence into the Use Cases that drive business today:-

 

  • Smart shelves in supermarkets that reduce understocking by comparing the measured popularity of items to the sales figures for those items (see the sketch after this list).
  • Smart boarding passes that check up on lost passengers, who, according to the FAA, cost airlines USD 28 billion in 2018 alone.
  • Smart windows with embedded IoE sensors that measure the light incident on building facades and change the glass transparency, reducing the incoming heat that balloons air-conditioning costs in commercial buildings.
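
To illustrate the smart-shelf idea from the first bullet, here is a minimal Python sketch; the data, field names and threshold are all made-up assumptions, not a real retail system.

```python
# Illustrative smart-shelf rule: flag items whose sensor-detected pick-up
# rate outpaces the stock remaining on the shelf, hinting at understocking.
shelf_events = {"nappies": 120, "cereal": 40}   # pick-ups/hour (shelf sensors)
sales = {"nappies": 115, "cereal": 38}          # till transactions/hour
stock = {"nappies": 30, "cereal": 200}          # units currently on the shelf

for item, picked in shelf_events.items():
    hours_left = stock[item] / max(picked, 1)   # crude run-out estimate
    if hours_left < 1.0:                        # threshold is arbitrary
        print(f"Restock alert: {item} (~{hours_left:.1f}h of stock left, "
              f"{sales[item]} sales/hour recorded)")
```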

As you can guess, all this technology is going to have a major impact on people's lives wherever we are: at home, at work or at play.

The work of the Business Analyst will be to figure out how to provide a seamless fit between the customer experience, the burgeoning IoE technologies and the continuous drive for business operational efficiency.

 

 

The companies that are really driving IoE are startups. And by a startup, we mean the hungry, bootstrapped, seat-of-the-pants type of company, founded by 15-year-olds in their parents' garage. In fact, this has been the case since the days of Microsoft and Apple. Nothing has changed.

Recent IoE startup success stories include:-

  • Nest (bought by Google for USD 3.2 billion, in cash)
  • Fleetmatics (bought by Verizon for USD 2.4 billion)
  • Ring (bought by Amazon for USD 1 billion)

The startups listed here have all shown the potential to deploy cloud-based IoE technologies that can change lives.

The really interesting thing is how such tech can change the way Business Analysts view the user persona.

 

Business analysis principally involves two things:-

  1. Understanding the business issues of most importance (problems and opportunities).
  2. Proposing changes to improve the business.

The first step remains the same irrespective of the technologies available, and involves interviewing stakeholders to understand strategic goals, business processes, use cases and a conceptual data model, culminating in a detailed statement of requirements.

It is during the second step that BAs need to map those requirements to a set of features which can be implemented within the project budget and project timescale. All this needs to be driven by the User Story.

So, when your supermarket client complains that customers cannot find nappies on the shelves because they are understocked, BAs must work with technologists to find solutions, which could include IoE-based smart shelves.

But how will a ‘smart shelf’ affect the customer buying experience?

If personalized discounts are also shown alongside the merchandise, will customers realise that certain behaviours can trigger a larger discount?

Will the technology drive people away because it is deemed 'too salesy'?

Let’s take another example:

What if your airline client complains about missing passengers who delay flights and cause excessive costs and even regulatory fines?

BAs would need to bring together IT and Operations to explore the benefits of smart boarding cards.

But will such devices constitute an invasion of privacy?

Should they be available on existing smartphone platforms or on a dedicated piece of hardware so you can track passengers’ location within an airport?

What will the delay be in recovering the physical hardware during boarding?

 

 

 

Government Technology Trends for 2020

These trends were selected in response to pressing public policy goals and the business needs of government organizations in jurisdictions around the globe. They fit into a broader set of macro-trends demanding the attention of today's government leaders, including social instability, perpetual austerity, an aging population, rising populism and the need to support sustainability goals, and they have the potential to optimize or transform public services.

 

Adaptive Security

An adaptive security approach treats risk, trust and security as a continuous and adaptive process that anticipates and mitigates constantly evolving cyber threats. It acknowledges there is no perfect protection and security needs to be adaptive, everywhere, all the time.

 

 

Citizen Digital Identity

Digital identity is the ability to prove an individual's identity via any government digital channel that is available to citizens. It is critical for inclusion and access to government services, yet many governments have been slow to adopt it. Government CIOs must provision digital identities that uphold both security imperatives and citizen expectations.

 

 

Multichannel Citizen Engagement

Governments that meet citizens on their own terms and via their preferred channels, such as in person, by phone, via mobile device, smart speaker, chatbot or augmented reality, will meet citizen expectations and achieve program outcomes. According to a 2018 survey, more than 50% of government website traffic now comes from mobile devices.

 

Agile by Design

Digital government is not a “set and forget” investment. CIOs must create a nimble and responsive environment by adopting an agile-by-design approach, a set of principles and practices used to develop more agile systems and solutions that impact both the current and target states of the business, information and technical architecture.

 

Digital Product Management

More than two-thirds of government CIOs said they already have, or are planning to implement, digital product management (DPM). Often replacing a “waterfall” project management approach, which has a poor track record of success, DPM involves developing, delivering, monitoring, refining and retiring “products” or offerings for business users or citizens. It causes organizations to think differently and delivers tangible results more quickly and sustainably.

 

Anything as a Service (XaaS)

XaaS covers the full range of IT services delivered in the cloud on a subscription basis. The CIO Survey found that 39% of government organizations plan to spend the greatest amount of new or additional funding on cloud services. The XaaS model offers an alternative to legacy infrastructure modernization, provides scalability and reduces the time to deliver digital government services.

 

Shared Services 2.0

Many government organizations have tried to drive IT efficiencies through centralization or sharing of services, often with poor results. Shared services 2.0 shifts the focus from cost savings to delivering high-value business capabilities such as enterprisewide security, identity management, platforms or business analytics.

 

 

Digitally Empowered Workforce

A digitally enabled work environment is linked to employee satisfaction, retention and engagement — but government currently lags other industries in this area. A workforce of self-managing teams needs the training, technology and autonomy to work on digital transformation initiatives.

 

Analytics Everywhere

Analytics everywhere refers to the pervasive use of analytics at all stages of business activity and service delivery. It shifts government agencies from the dashboard reporting of lagging indicators to autonomous processes that help people make better decisions in real time.

 

Augmented Intelligence

Government CIOs should reframe artificial intelligence as "augmented intelligence": a human-centered partnership model of people and artificial intelligence working together to enhance cognitive performance.

 

Strategic Technology Trends for 2020

Hyperautomation

Hyperautomation is the combination of multiple machine learning (ML), packaged software and automation tools to deliver work. Hyperautomation refers not only to the breadth of the palette of tools, but also to all the steps of automation itself (discover, analyze, design, automate, measure, monitor and reassess). Understanding the range of automation mechanisms, how they relate to one another and how they can be combined and coordinated is a major focus of hyperautomation.

This trend was kicked off with robotic process automation (RPA). However, RPA alone is not hyperautomation; hyperautomation requires a combination of tools to replicate the pieces of a task where a human is involved.

 

Multiexperience 

Through 2028, the user experience will undergo a significant shift in how users perceive the digital world and how they interact with it. Conversational platforms are changing the way in which people interact with the digital world. Virtual reality (VR), augmented reality (AR) and mixed reality (MR) are changing the way in which people perceive the digital world. This combined shift in both perception and interaction models leads to the future multisensory and multimodal experience.

“The model will shift from one of technology-literate people to one of people-literate technology. The burden of translating intent will move from the user to the computer,” said Brian Burke, research vice president at Gartner. “This ability to communicate with users across many human senses will provide a richer environment for delivering nuanced information.”

 

Democratization of Expertise

Democratization is focused on providing people with access to technical expertise (for example, ML, application development) or business domain expertise (for example, sales process, economic analysis) via a radically simplified experience and without requiring extensive and costly training. “Citizen access” (for example, citizen data scientists, citizen integrators), as well as the evolution of citizen development and no-code models, are examples of democratization.

Through 2023, Gartner expects four key aspects of the democratization trend to accelerate, including democratization of data and analytics (tools targeting data scientists expanding to target the professional developer community), democratization of development (AI tools to leverage in custom-developed applications), democratization of design (expanding on the low-code, no-code phenomena with automation of additional application development functions to empower the citizen-developer) and democratization of knowledge (non-IT professionals gaining access to tools and expert systems that empower them to exploit and apply specialized skills beyond their own expertise and training).

 

Human Augmentation 

Human augmentation explores how technology can be used to deliver cognitive and physical improvements as an integral part of the human experience. Physical augmentation enhances humans by changing their inherent physical capabilities through implanting or hosting a technology element on their bodies, such as a wearable device. Cognitive augmentation can occur through accessing information and exploiting applications on traditional computer systems and the emerging multiexperience interface in smart spaces. Over the next 10 years, increasing levels of physical and cognitive human augmentation will become prevalent as individuals seek personal enhancements. This will create a new "consumerization" effect where employees seek to exploit their personal enhancements — and even extend them — to improve their office environment.

Transparency and Traceability

Consumers are increasingly aware that their personal information is valuable and are demanding control. Organizations recognize the increasing risk of securing and managing personal data, and governments are implementing strict legislation to ensure they do. Transparency and traceability are critical elements to support these digital ethics and privacy needs.

Transparency and traceability refer to a range of attitudes, actions and supporting technologies and practices designed to address regulatory requirements, preserve an ethical approach to use of artificial intelligence (AI) and other advanced technologies, and repair the growing lack of trust in companies. As organizations build out transparency and trust practices, they must focus on three areas: (1) AI and ML; (2) personal data privacy, ownership and control; and (3) ethically aligned design.

 

The Empowered Edge 

Edge computing is a computing topology in which information processing and content collection and delivery are placed closer to the sources, repositories and consumers of this information. It tries to keep the traffic and processing local to reduce latency, exploit the capabilities of the edge and enable greater autonomy at the edge.

“Much of the current focus on edge computing comes from the need for IoT systems to deliver disconnected or distributed capabilities into the embedded IoT world for specific industries such as manufacturing or retail. However, edge computing will become a dominant factor across virtually all industries and use cases as the edge is empowered with increasingly sophisticated and specialized compute resources and more data storage. Complex edge devices, including robots, drones, autonomous vehicles and operational systems, will accelerate this shift.”

 

 

 

Distributed Cloud

A distributed cloud is the distribution of public cloud services to different locations while the originating public cloud provider assumes responsibility for the operation, governance, updates to and evolution of the services. This represents a significant shift from the centralized model of most public cloud services and will lead to a new era in cloud computing.

 

Autonomous Things

Autonomous things are physical devices that use AI to automate functions previously performed by humans. The most recognizable forms of autonomous things are robots, drones, autonomous vehicles/ships and appliances. Their automation goes beyond the automation provided by rigid programming models, and they exploit AI to deliver advanced behaviors that interact more naturally with their surroundings and with people. As the technology capability improves, regulation permits and social acceptance grows, autonomous things will increasingly be deployed in uncontrolled public spaces.

“As autonomous things proliferate, we expect a shift from stand-alone intelligent things to a swarm of collaborative intelligent things where multiple devices will work together, either independently of people or with human input. For example, heterogeneous robots can operate in a coordinated assembly process. In the delivery market, the most effective solution may be to use an autonomous vehicle to move packages to the target area. Robots and drones aboard the vehicle could then effect final delivery of the package.”

 

 

Practical Blockchain

Blockchain has the potential to reshape industries by enabling trust, providing transparency and enabling value exchange across business ecosystems, potentially lowering costs, reducing transaction settlement times and improving cash flow. Assets can be traced to their origin, significantly reducing the opportunities for substitutions with counterfeit goods. Asset tracking also has value in other areas, such as tracing food across a supply chain to more easily identify the origin of contamination or track individual parts to assist in product recalls. Another area in which blockchain has potential is identity management. Smart contracts can be programmed into the blockchain where events can trigger actions; for example, payment is released when goods are received.
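
The closing example, payment released when goods are received, can be read as an event-triggered state machine. Below is a minimal plain-Python illustration of that trigger logic; it is not on-chain contract code (real smart contracts run on platforms such as Ethereum), and all names here are our own.

```python
# A smart contract reduced to its essence: state plus an event-triggered rule.
class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.state = "AWAITING_DELIVERY"

    def on_event(self, event: str):
        # The programmed rule: receiving the goods releases the payment.
        if event == "GOODS_RECEIVED" and self.state == "AWAITING_DELIVERY":
            self.state = "COMPLETE"
            return f"release {self.amount} to {self.seller}"
        return None  # any other event changes nothing

contract = EscrowContract("alice", "bob", 100)
print(contract.on_event("GOODS_RECEIVED"))  # release 100 to bob
print(contract.state)                        # COMPLETE
```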

Blockchain remains immature for enterprise deployments due to a range of technical issues, including poor scalability and interoperability. Despite these challenges, the significant potential for disruption and revenue generation means organizations should begin evaluating blockchain, even if they don't anticipate aggressive adoption of the technologies in the near term.

 

AI Security

AI and ML will continue to be applied to augment human decision making across a broad set of use cases. While this creates great opportunities to enable hyperautomation and leverage autonomous things to deliver business transformation, it creates significant new challenges for the security team and risk leaders with a massive increase in potential points of attack with IoT, cloud computing, microservices and highly connected systems in smart spaces. Security and risk leaders should focus on three key areas — protecting AI-powered systems, leveraging AI to enhance security defense, and anticipating nefarious use of AI by attackers.

 

Architecture

IoE Architecture

PENDING WRITING THE INTRODUCTION

IoE architectures can be described as independent IoE ecosystems that can be physical, virtual or a hybrid of the two. They consist of active physical devices, sensors, actuators, services, communication protocols and layers, end users, developers and interface layers. Several functional blocks are defined in an IoE system and, although no commonly agreed conceptualization exists, several different approaches are usually considered: a three-layer architecture constituted by Application, Network and Perception layers; a five-layer architecture that also includes Business and Process layers; cloud and fog systems; and social IoE paradigms. The development of next-generation IoE infrastructures is still at an early stage, and relevant progress is expected in the coming years. By 2020, up to 20.4 billion IoT/IoE devices will be connected together. New tools to support the new paradigm are needed: smart management of resources, better security for the population, healthcare, and the engagement of citizens in their everyday activities are some examples of the scenarios that can be unlocked with embedded IoE ecosystem support.

 

Basically, there are three IoE architecture layers:

  1. The client side (IoE Device Layer)
  2. Operators on the server side (IoE Gateway Layer)
  3. A pathway for connecting clients and operators (IoE Platform Layer)
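
To make the three layers above concrete, here is a minimal Python sketch of a reading flowing from device to gateway to platform. The function names and message fields are our own illustrative assumptions, not part of any IoE standard.

```python
# Device layer: produces raw readings from the physical world.
def device_read() -> dict:
    return {"sensor_id": "t-01", "celsius": 21.7}

# Gateway layer: normalizes/enriches device traffic before it leaves the site.
def gateway_forward(reading: dict) -> dict:
    reading["unit"] = "C"
    return reading

# Platform layer: server-side ingestion, storage and rules.
def platform_ingest(message: dict, store: list) -> None:
    store.append(message)

store: list = []
platform_ingest(gateway_forward(device_read()), store)
print(store)  # [{'sensor_id': 't-01', 'celsius': 21.7, 'unit': 'C'}]
```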

 

 

In essence, an IoE architecture is a system of numerous elements: sensors, protocols, actuators, cloud services and layers. Given this complexity, the IoE architecture is commonly described in four stages, which steadily incorporate these various types of components into a sophisticated and unified network.

What are the layers of the IoE service architecture?

 

This architecture consists of three layers: the Perception Layer, the Network Layer and the Application Layer. Briefly: the main task of the Perception Layer is to perceive the physical properties of the things around us that are part of the IoE; the Network Layer transmits the collected data onward; and the Application Layer delivers application-specific services to the user.

 

An Overview of the Main Stages in the IoE Architecture Diagram

In simple terms, the 4-stage IoE architecture consists of:

  1. Sensors and actuators
  2. Internet gateways and data acquisition systems
  3. Edge IT
  4. Data center and cloud

Each of these stages is presented in detail below.

 

Stage 1. Networked things (wireless sensors and actuators)

The outstanding feature of sensors is their ability to convert information obtained from the outer world into data for analysis. In other words, the 4 stages of an IoE architecture framework begin with sensors so that information arrives in a form that can actually be processed.

For actuators, the process goes even further: these devices are able to intervene in physical reality. For example, they can switch off the light or adjust the temperature in a room.

Because of this, the sensing and actuating stage covers and adjusts everything needed in the physical world to gain the necessary insights for further analysis.
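
A minimal Python sketch of Stage 1: one function stands in for a sensor converting a physical quantity into data, another for an actuator acting on a command. All names and values are illustrative assumptions.

```python
import random

def temperature_sensor() -> float:
    """Convert the 'outer world' into a data point (random stand-in for hardware)."""
    return round(random.uniform(18.0, 30.0), 1)

def hvac_actuator(target_c: float, current_c: float) -> str:
    """Intervene in physical reality: heat or cool toward the target."""
    return "cool" if current_c > target_c else "heat"

reading = temperature_sensor()
print(reading, "->", hvac_actuator(22.0, reading))
```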

Stage 2. Sensor data aggregation systems and analog-to-digital data conversion

Even though this stage of the IoE architecture still means working in close proximity to sensors and actuators, Internet gateways and data acquisition systems (DAS) appear here too. Specifically, the latter connect to the sensor network and aggregate its output, while Internet gateways work over Wi-Fi or wired LANs and perform further processing.

The vital role of this stage is to process the enormous amount of information collected in the previous stage and squeeze it down to an optimal size for further analysis. The necessary conversions in terms of timing and structure also happen here.

In short, Stage 2 makes the data both digitized and aggregated.
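
A minimal Python sketch of Stage 2's aggregation step: many raw samples are squeezed into one compact, structured record before traveling upstream. The values and field names are made up.

```python
from statistics import mean

# One minute of raw sensor readings (illustrative values).
raw_samples = [21.4, 21.5, 21.6, 21.4, 29.9, 21.5]

# The gateway/DAS reduces six samples to a single summary record.
record = {
    "sensor_id": "t-01",
    "window": "60s",
    "count": len(raw_samples),
    "mean": round(mean(raw_samples), 2),
    "min": min(raw_samples),
    "max": max(raw_samples),
}
print(record)
```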

Stage 3. The appearance of edge IT systems

At this stage of the IoE architecture, the prepared data is transferred to the IT world. In particular, edge IT systems perform enhanced analytics and pre-processing here, using technologies such as machine learning and visualization. At the same time, some additional processing may happen here before the data enters the data center.

Stage 3 is closely linked to the previous phases in the building of an IoE architecture: edge IT systems are typically located close to where the sensors and actuators are situated, creating a wiring closet, although they may also reside in remote offices.
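
A minimal Python sketch of Stage 3: an edge node pre-processes records locally and escalates only anomalies to the data center. The threshold rule is a deliberately crude stand-in for the enhanced analytics described above.

```python
def edge_filter(records: list, max_mean: float = 25.0):
    """Forward anomalous records upstream; count the rest as handled locally."""
    to_cloud, handled_locally = [], 0
    for rec in records:
        if rec["mean"] > max_mean:   # anomaly: escalate to the data center
            to_cloud.append(rec)
        else:                        # normal: no need to ship it upstream
            handled_locally += 1
    return to_cloud, handled_locally

records = [{"sensor_id": "t-01", "mean": 21.5},
           {"sensor_id": "t-02", "mean": 27.3}]
print(edge_filter(records))  # only the anomalous record travels onward
```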

Stage 4. Analysis, management, and storage of data

The main processes of the last stage of the IoE architecture happen in the data center or cloud. Precisely, this stage enables in-depth processing, along with follow-up revision for feedback. Here, the skills of both IT and OT (operational technology) professionals are needed; the phase calls for analytical skills of the highest rank, in both the digital and human worlds. Data from other sources may also be included here to ensure an in-depth analysis.

After meeting all the quality standards and requirements, the information is brought back to the physical world — but in a processed and precisely analyzed form.

Stage 5 of IoE Architecture?

In fact, there is an option to extend the process of building a sustainable IoE architecture by introducing an extra stage: initiating user control over the structure (unless, of course, the goal is full automation). The main tasks here are visualization and management. With Stage 5 included, the system turns into a closed loop in which a user sends commands to sensors/actuators (Stage 1) to perform some actions.
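
A minimal Python sketch of the Stage 5 feedback loop: a user command travels from a dashboard back down to a Stage 1 actuator. All names are illustrative.

```python
def dashboard_command(user_choice: str) -> dict:
    """The visualization/management layer packages a user's decision."""
    return {"target": "hvac-01", "action": user_choice}

def actuator_execute(command: dict) -> None:
    """Back at Stage 1, the actuator carries the command out."""
    print(f"{command['target']}: executing '{command['action']}'")

actuator_execute(dashboard_command("cool"))  # user -> platform -> actuator
```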

 

Platforms, Enablers, and Accelerators

PLATFORMS

IoE Cloud Platforms

Before we get to the main point and list the 10 best IoT cloud platforms, we first need to go back to basics.

You might be a developer, a startup co-founder, or a business manager, wondering how you can benefit from the coming revolution of the 'Internet of Everything' (IoE).

At the moment, the internet is run mainly by humans. The majority of communication, messages and data happens between people, through desktops, laptops and smartphones. This is changing. A whole new category of devices is starting to take over the internet. These devices aren't run by people and don't send messages to people either. They are machines that talk to other machines, and they've been given the simple name 'Things'.

As these devices start to become connected, we need a place to send, store, and process all of the information. Setting up your own in-house system isn’t practical anymore. The cost of maintaining, upgrading and securing a system is just too high, and there are some great services available.

There are already plenty of cloud services available for your personal information. Many of these companies are trying to position themselves as leaders in the Internet of Things revolution. Over the next 5-10 years, we will witness a bloody battle for market share.

So which is the best IoT software? As with every IT system or service, different IoT solutions have their advantages and disadvantages. Here is a comparison of the major players, to get an idea of what is on offer at the moment.

  1. Amazon Web Services IoT Platform

Amazon dominates the consumer cloud market. They were the first to really turn cloud computing into a commodity, way back in 2004. Since then they've put a lot of effort into innovation and building features, and probably have the most comprehensive set of tools available.

  2. Microsoft Azure IoT Hub

Microsoft is taking its Internet of Things cloud services very seriously. They have cloud storage, machine learning and IoE services, and have even developed their own operating system for IoE devices. This means they intend to be a complete IoE solution provider.

  3. IBM Watson IoT Platform

 

IBM is another IT giant trying to set itself up as an Internet of Things platform authority. They try to make their cloud services as accessible as possible to beginners, with easy apps and interfaces. You can try out their sample apps to get a feel for how it all works. You can also store your data for a specified period, to get historical information from your connected devices.

 

  4. Google Cloud Platform

Search giant Google is also taking the Internet of Things very seriously. They claim that “Cloud Platform is the best place to build IoT initiatives, taking advantage of Google’s heritage of web-scale processing, analytics, and machine intelligence”.

  5. Oracle

 

Oracle is a platform-as-a-service provider that seems to be focusing on manufacturing and logistics operations. They want to help you get your products to market faster.

  6. Salesforce

Salesforce specializes in customer relations management. Their cloud platform is powered by Thunder, which is focused on high speed, real-time decision making in the cloud. The idea is to create more meaningful customer interactions. Their easy point-and-click UI is designed to connect you with your customers.

  7. Bosch

Bosch is a German IT company that has recently launched its own cloud IoT services to compete with the likes of Amazon. They focus on security and efficiency. Their IoT platform is flexible and based on open standards and open source.

  8. Cisco IoT Cloud Connect

 

Cisco is a global leader in IT services, helping companies “seize the opportunities of tomorrow”. They strongly believe that the opportunities of tomorrow lie in the cloud, and have developed a new ‘mobility-cloud-based software suite’.

 

  9. General Electric Predix

 

General Electric has decided to get into the platform-as-a-service game. They are focused on the industrial market, offering connectivity and analytics at scale for mainstream sectors like aviation.

 

  10. SAP

 

The SAP homepage reads like a buzzword dictionary for the last couple of years. Here is the title of one press release: "SAP Cloud Platform extends its capabilities for IoT, Big Data, Machine Learning, and Artificial Intelligence".

Which One Should You Go With?

This is a tough question. There is no single best IoE cloud platform; ultimately it will depend on the specific needs of your business. At the moment Amazon is the most established in this field, but it can be expensive.

If you just want to test out some ideas, go with a provider that offers a free tier. You’ll be able to get a feel for how it works, the pros and cons, and what features you might need in the future.

If you still aren’t sure, contact a development team that has experience in building the type of systems you need and implementing AI and data engineering solutions. They will know the ins-and-outs of every platform, and will easily be able to recommend the perfect IoE cloud platform to take your business to the next level.

 

ENABLERS

Three enablers for IoE. … The Internet of Things (IoT) consists of everyday objects – physical devices, vehicles, buildings, wearable technology etc. – with embedded electronics, software, sensors and network connectivity, allowing them to collect, send and receive data.

There are a number of key enablers that enterprises should focus on, when developing their IoT ecosystems. These are briefly discussed below.

  • Enabling platforms: as mentioned above, platforms are the foundation of the ecosystem. Businesses need to deploy IoT platforms that fulfil the expectations of both customers and partners in terms of functionality, reliability, security and flexibility. The platform needs to enable not only vertical solutions, but a true ecosystem in the form of a marketplace for IoT products and services.

 

  • APIs: APIs are the basic building blocks of an IoT ecosystem, and businesses must therefore develop a strong API strategy. This strategy should be based on a deep understanding of the IoT markets that the business intends to target. Designing and supporting APIs for everyone is impractical, which means that a focused approach is recommended. The business should also develop an API roadmap that is in line with its overall IoT strategy, while the API pricing and support model must be aligned with the business’ ecosystem revenue model. APIs can ultimately foster – or discourage – network effects. If using your APIs is too onerous or does not create sufficient value, ecosystem partners will be reluctant to invest time or effort. It is therefore vital that businesses define their API strategies with market and partner needs in mind.

 

 

  • Communities: for ecosystems to be true ecosystems, communities of partners need to exist. These partners should be able to develop products and services based on the company resources (via APIs), as well as those of other ecosystem participants. The benefits to businesses can be immense. By enabling others to invest and create new products and services, the business is able to boost innovation. This is achieved without incurring every cost and risk involved, but by sharing these with the ecosystem partners. Companies like IBM, Amazon and Microsoft are very active in this area, sponsoring hackathons, university research programs and incubators.

 

  • Own branded services: in many cases, it makes sense for businesses to offer complete IoT solutions, either with their own products or through integration with partners. This serves to signal commitment to the market and to kick-start ecosystem expansion. A good example is Digital Life from AT&T, a telco in the US – the company has developed an integrated home monitoring service together with partners, and markets the service as an AT&T-branded product. This branded service signals AT&T's commitment to the IoT and, as the service establishes itself in the market, AT&T is looking at opening it to a wider array of partners, thus further developing the initial ecosystem.

 

  • Revenue models: revenue models are a key aspect for the successful development of IoT ecosystems. Businesses looking to attract ecosystem partners need to define the right revenue generation and sharing model – one that incentivizes partners to join the ecosystem, reduces risks for partners to innovate and fits with the business model of the individual partners. Some partners will be attracted to a revenue sharing model, while others will prefer a licencing or fixed royalty-based model. Models like “freemium” can be good to encourage experimentation and early adoption in IoT communities. This means that firms will need to support several revenue and partnership models, which in turn will require new decision and management systems.

 

  • Ecosystem support functions: the final (and perhaps most overlooked) enabler is the internal organization and the related support functions. A critical function here is partner management, which means being able not only to recruit but also to incentivize and support ecosystem partners throughout the partnership lifecycle. This is a capability that goes beyond basic reseller agreements. Businesses will also require dedicated teams to support the ecosystem. This support includes technical help (e.g. how to use an API), but also marketing (e.g. sell your apps on our marketplace) and operational support (e.g. "fulfilled by Amazon").

Moreover, a governance model that establishes clear ‘ecosystem rules’ is critical in order to maintain harmony among members and a healthy cooperative ecosystem.

ACCELERATORS

More and more early-stage startups are seeking accelerators to help them quickly identify their best growth strategy and launch with a plan to achieve it. Over the past few years, many new accelerators have appeared that target specific types of startup and deliver a more rewarding experience. Three distinct types of accelerators have emerged. One type aligns with startups targeting specific markets ("vertical accelerators"). A second type aligns with startups targeting specific technologies and products ("horizontal accelerators"). Accelerators operated by specific companies ("corporate accelerators") make up a third type. These focused accelerators develop a strong program around their theme, build ties to a theme-aligned mentor network, and solidify relationships with investors actively engaged in the accelerator's theme. The resulting accelerator ecosystem is better able to assist these specific types of startups, optimizing the probability of success for both the startup and the accelerator.

Introduction

Startup accelerators have a significant positive impact on early-stage startups, helping guide them to a successful launch. During the past few years, the number of new accelerators has increased. This growth has been fueled not only by startups looking to leverage expert guidance, but also by eager angel investors and new venture fund managers seeking access to potentially lucrative (but risky) investment options.

Accelerators vs. Incubators

Both accelerators and incubators help young firms grow by providing guidance, but in different ways. Business incubators typically engage startup companies as tenants in their workspace for varying lengths of time. They are often associated with universities and provide startups with easy access to services such as accounting and legal advice. Because of this construct, incubators are known to be inefficient, typically spending their modest funds on infrastructure expenses rather than on directly impacting the growth of the startups. In these structures they also lack expedience, as startups have a tendency to remain for a long time to sort out their businesses. While they may exhibit some signs of success, it seems that these incubators often end up carrying the dead weight of under-performing companies that can only survive within the incubator. And improvement does not appear to be imminent.

Startup accelerators, on the other hand, take a selected group of startups (a “cohort”) through a structured program over a specific period, usually 3-6 months. Accelerators make a small seed investment in their startups in exchange for a small amount of equity. This initial investment is frequently sufficient to cause founders to dedicate all their time to their startups. Once engaged by the accelerator, the startups are given access to a large mentor network composed of entrepreneurs, alumni, and outside investors. The contacts and introductions may be the accelerator’s biggest value for prospective startups. During the accelerator program, the startups go through intensive mentorship and may participate in periodic educational seminars. The goal of the accelerator is rapid growth, to sort out all organizational, operational, and strategic difficulties that might be facing their business and ultimately to get the companies funded with seed financing. The program normally culminates in a public or semi-private pitch event, “Demo Day”, during which the startups present their business plans to candidate investors.

Impact of an accelerator on the trajectory of startups

What's the expected impact of an accelerator on the trajectory of various startups? Consider the three example startups, Companies A, B and C, in the top accompanying chart. Company A shows initial progress but ultimately doesn't succeed, for whatever reason, ending in failure. Company B also shows initial progress but stagnates over time; its rate of progress slows nearly to zero, presumably yielding less success than could be realized doing something else. Of course, an accelerator isn't always required for success, and Company C shows the trajectory of a successful startup.

Now consider the effect of a well-designed accelerator, with its period of involvement shown in the lower chart as the yellow-shaded region. The trajectory of each of the three sample companies improves. For Company A, the accelerator identified impending failure sooner; by understanding the cause of the failure and significantly revising its idea, this company was able to pivot and find a path to success. Similarly, Company B, previously on a path to stagnation, pivots to a new idea and a path to success. Even Company C's trajectory becomes steeper. In each case, the defined accelerator period, hands-on mentoring and an uncompromising ecosystem meant hard decisions could be made sooner rather than later, positively impacting the trajectory of each company. With acceleration, companies don't waste time and money on a business plan that doesn't work; by failing sooner, they learn a lot about how to be a successful company.

Accelerators get Focused – three types emerge

Early Accelerators

Startup accelerators aren't new. Y Combinator, from Silicon Valley, is known as the first successful accelerator program, launched almost a decade ago. The program spawned many similar programs around the country that had varying degrees of success. Shortly thereafter, Techstars (Boulder, CO) launched a similar program. While Y Combinator chose not to pursue replicating its early successes and scaled back its programs in 2012, Techstars duplicated its program in regions that were rich with startups. Today, they have expanded to over 18 locations in the U.S. and Europe, including nine city programs in cities such as Austin, Berlin, Boston, Chicago, London, New York City and Seattle. Techstars is likely the world's largest accelerator organization.

While hundreds of accelerators exist, the trend toward creating more of them may be slowing ("The Startup Accelerator Trend is Finally Slowing Down", TechCrunch 11/2013). In some areas, new accelerators operate for 1-3 years and then disappear, while the few top legacy accelerators continue to dominate. These early accelerators have benefited from the cumulative effect of several successful programs, which causes the investor pool to steadily increase and the network of mentors, alumni and educators to deepen. Many of these early accelerators remain strong even today, viewing a diversity of startups in a variety of areas. In light of this growing strength, new accelerators often struggle to attract the best startups and may attract less mature companies into their cohorts. In an environment where only one-quarter of a cohort's startups find follow-on investors, this can have a devastating impact on the success of those new accelerators.

Over the past few years, new "focused" accelerators have appeared that target specific types of startup and deliver a more rewarding experience. In general, there are three distinct types of accelerators. One type aligns itself with startups targeting specific markets ("vertical accelerators"). A second type aligns itself with startups targeting specific technologies/product-types ("horizontal accelerators"). A third type is operated directly or indirectly by specific companies ("corporate accelerators"). Of course, accelerators aren't necessarily exclusively one type and may have different focus characteristics. When these focused accelerators develop a strong program around their theme, build a strong, aligned mentor network, and solidify rewarding relationships with active investors, both the accelerator and the cohorts benefit.

Vertical Accelerators

In large metropolitan areas, many new vertically-themed accelerators have appeared. The "vertical" theme refers to a specific industry niche where a diversity of companies markets products and services to the same group. For accelerators, the vertical theme is chosen to leverage the unique strengths of the regional investor community in that specific vertical market and to build a mentor network around it. Metropolitan areas are often rich with commerce, finance, media, culture, art, fashion, research, education, and entertainment. As such, there are plenty of large and diverse groups actively involved in managing and supporting companies in those large vertical markets. Also, there are lots of investors with a diversity of funds indexed for those markets. For new accelerators, the vertical themes are so fundamentally embedded in the community that excellent resources are available with which to fund classes of cohorts and to grow their mentor and investor networks.

Among vertically themed accelerators, the most common themes include financial technology (FinTech), health technology, education technology (EdTech), energy, media, real estate and fashion. Accelerators also exist in more specialized vertical markets including hospitality, non-profit, film, and food. It remains to be seen how these will survive over time and whether they will pivot as their markets evolve.

Horizontal Accelerators

Also popular are horizontal accelerators. The “horizontal” theme refers to accelerators focused on startups that intend to develop a product or service meeting a specific need for customers across different market niches. As with vertical accelerators, the horizontal theme is chosen to leverage the unique strengths of regional investors interested in that specific type of product and to build a mentor network around it.

Among horizontally themed accelerators, many themes exist, including Internet of Things (IoT), cloud, hardware, Software as a Service (SaaS), mobile technology, internet and enterprise products. Work-Bench is a horizontal accelerator located in NYC with enterprise software as its area of focus. Outside the US, Cisco, Intel, and Deutsche Telekom have partnered to create Challenge-Up!, a horizontal accelerator that helps IoT/IoE startups go to market faster.

Corporate Accelerators

Some corporations have also sponsored startup accelerators with many of the same design elements as vertical and horizontal accelerators. These so-called corporate accelerators target either specific industries (vertical markets) or technology areas (horizontal markets) of particular interest to those companies. Techstars has launched nine such programs in various locations, partnering with Fortune 500 companies to run corporate accelerator programs. For example, the “Techstars powered” Disney Accelerator is aligned with Disney’s philosophy of helping technology innovators “turn their dreams for new media and entertainment experiences into reality”. Similarly, the “Techstars powered” R/GA Accelerator in NYC leverages R/GA’s expertise in innovation, technology and design, targeting startups that concentrate on hardware and IoT. Media Camp is an accelerator initiative by Turner/Warner Bros. focused on technology relevant to the media and entertainment industry.

Focusing on the mobile market, Samsung has launched accelerators in two US cities, one in San Francisco and the other in New York. These accelerators are intended to tap into local talent and startups to innovate in software and services, areas where Samsung faces tough competition in the mobile market. Also in the mobile market, Techstars powers the Sprint Mobile Accelerator near Sprint’s Missouri headquarters. This accelerator guides startups focused on mobile products including wearables, mobile applications, and enterprise solutions, as well as the verticals of education, gaming, entertainment, health, security and government.

The Accelerator’s Ecosystem – where the action is

Designing the Ecosystem

Once the theme is defined, the accelerator team develops its ecosystem around it. The ecosystem comprises all the components of the accelerator into which the startups are immersed: mentors, alumni, investors, the accelerator’s team and its sponsors.

While the goals of each accelerator vary, the following three are the most common:

  • Train the startup’s founders in entrepreneurial best practices by sharing the perspective and experience of mentors and alumni.
  • Help the founders thoroughly evaluate their business strategy, identify any pivots that may benefit their success, and refine their pitch to reflect the resulting strategy.
  • Provide exposure to members of the investment community, including angel investors and VCs.

The program’s success comes from aligning the accelerator’s objectives, and every aspect of the program, in support of its theme. The best programs align all parties through strong team leadership and excellent management of startups and associates. The team wants to ensure that mentors have a stake in the success of the startups they advise, and that other participants have the vision and incentives to help. Over time, the program improves as the accelerator team iterates on the program and ecosystem: culling the network of mentors and alumni, engaging visionary educators from local university entrepreneurship programs, winning the support of committed sponsors and, of course, selecting the most appropriate investors.

Selection of startups and location

The selection of startups is another important success factor. Which startups participate in each cohort is critical to assuring synergy among them, and proper selection aligns the startups with the accelerator’s theme and the ecosystem that supports it. Startup founders, for their part, invest significant time as well as opportunity cost. They want to get the most from the experience and are well served by thoroughly researching the program, its management team, the mentor network and the types of investors they can expect to meet.

Are certain types of startup better suited to one accelerator location than another? For vertical accelerators, the selection is obvious. But even for legacy, general-purpose accelerators, location influences selection. After all, one would expect a startup serving open-space and environmental markets to have a higher probability of a successful experience in an accelerator located outside a large metropolitan area, and perhaps a startup offering a clever seafood delivery service would be more successful in a New England accelerator. While these may be clichés, the point is clear: different regions offer different strengths, and startups not aligned with their chosen location should seek an ecosystem elsewhere that better supports their growth.

The mentor network

It’s important for startups to select an accelerator that engages the mentors most appropriate to their profile. Accelerators, in turn, can maximize their success by ensuring that their startups receive excellent advice from mentors and alumni as well as from the other startups in the cohort.

In selecting the best accelerator, startups need to review mentor profiles carefully to confirm alignment between their needs and the skills and experience available. For example, startup founders may want to review each mentor’s knowledge of funding, marketing, design, accounting, and intellectual property. There are different classifications for participants in a mentor network. Creating a list like the one below may help to identify them.

Example of Mentor Classifications:

  • Serial Entrepreneur – Those who have founded at least two companies, one for at least five years.
  • Entrepreneur – Founders of a startup company.
  • Startup Participant – Those who have been primary contributors to the growth of a startup (but not as a founder).
  • Professional – Employees whose significant role in a company gives them insight into markets, opportunities, and gaps.
  • Investor – Someone who is primarily an angel investor or a member of a venture capital firm.

Startups should satisfy themselves that the mentor profiles (categorized in a way that highlights the necessary skills and background) represent the best available source of advice for their venture.

Summary

Startups have new and improving opportunities to benefit from participating in startup accelerators. Many new “focused” accelerators are well aligned with specific themes, which may be ideal for startups targeting specific markets (“vertical accelerators”) or specific technologies and product types (“horizontal accelerators”). Alternatively, “corporate accelerators” operated by specific companies may also be a viable option. By aligning prevalent regional market interests, the accelerator’s funding sources, the mentor network that supports it, and the target investor community with their program, focused accelerators can create an ideal ecosystem for startups that match their theme.

As traditional analytics, and the data it is meant to analyze, grow more dynamic and complex, the role of analytics is shifting: from a tool that drives decision making by delivering data insights, to one that drives business processes, both by recommending the most appropriate actions for issue resolution and by triggering actions that resolve issues automatically.

 

While technology advances at a breakneck pace to make the lives of IT Ops teams simpler and more efficient, cloud environments are only getting more complex, and will continue to do so as data streams in from an increasing number of sources: the explosion of IoT, relational databases, CRM systems and application logs, to name a few. This is a lot of data, both structured and unstructured. Real-time monitoring is critical so that IoE tools can alert IT immediately, minimizing system downtime and supporting anomaly detection.

 

The ability to monitor end to end in real time is paramount, so that, at the end, humans are positioned to make proactive, critical decisions. The core of IoE platforms is that machines take over a huge chunk of the repetitive work that can consume hours of a data team’s time. This takes the burden out of human hands and puts humans in control at the end, where the most vital decisions can make or break an organization’s IT operation.

 

Before you go out and purchase an IoE platform, you should ask yourself these four questions:

 

  1. Does the platform monitor and detect IT issues in real time? 

The term real time is a bit cloudy (no pun intended). True real time refers to updates, or the frequency of data-point retrieval, being fast enough that new information feels instantaneous; a widely cited standard puts this at one second. In other words, the time between the creation of a data point (alert, event, metric, etc.) and its introduction into the monitoring system should be one second or less.
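To make the one-second rule concrete, here is a minimal Python sketch (all names and numbers are illustrative assumptions, not from any particular platform) that measures the lag between a data point’s creation and its ingestion into a monitor and flags anything over budget:

    import time

    REAL_TIME_BUDGET_SECONDS = 1.0  # the "one second or less" rule of thumb

    def ingest(data_point):
        """Receive a data point (alert, event, metric) into the monitor."""
        latency = time.time() - data_point["created_at"]
        if latency > REAL_TIME_BUDGET_SECONDS:
            print(f"WARNING: {data_point['name']} arrived {latency:.2f}s late")
        return latency

    # A metric created just now should ingest well under the budget.
    ingest({"name": "cpu.load", "created_at": time.time(), "value": 0.93})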

 

  2. Does the platform have the ability to analyze historical data?

While ITOA focuses on historical data, many IoE platforms can ingest a wealth of historical data in addition to real-time data. You want a platform that can harness the power of previous customer data, along with other sources, and provide data-driven insights that inform the organization on the best path to resolution.
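As a rough illustration of why historical data matters, this sketch (the numbers and names are made up) builds a baseline from historic readings and scores a live reading against it:

    import statistics

    def baseline(history):
        """Summarize historic readings into a mean/stdev baseline."""
        return statistics.mean(history), statistics.stdev(history)

    def zscore(value, mean, stdev):
        """How far a live reading sits from the historic norm, in stdevs."""
        return abs(value - mean) / stdev if stdev else 0.0

    history = [200, 210, 195, 205, 198, 202, 207]  # e.g. daily response times (ms)
    mean, stdev = baseline(history)
    print(zscore(480, mean, stdev))  # a live reading far outside the norm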

 

  3. How soon can I realize value from an IoE platform?

You want a platform that uses the right mix of algorithms and methods, leaving no stone in your environment unturned when uncovering anomalies, for example. Some of this mix may require longer-term analysis, but a large number of anomalies should be detectable within a short period. Detection quality should improve over time, as you feed the system data that powers its learning and as it analyzes longer time spans. Feeding the platform historical data is a big part of giving the system what it needs to learn, so it can make the most appropriate decisions as quickly as possible.
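The idea that detection quality improves as the system sees more data can be sketched with a simple rolling z-score detector; this is an illustrative toy, not how any specific IoE product works:

    from collections import deque
    import statistics

    class AnomalyDetector:
        """Rolling z-score detector: judgments sharpen as history grows."""
        def __init__(self, window=50, threshold=3.0):
            self.history = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, value):
            is_anomaly = False
            if len(self.history) >= 10:  # withhold judgment until there is history
                mean = statistics.mean(self.history)
                stdev = statistics.stdev(self.history) or 1e-9
                is_anomaly = abs(value - mean) / stdev > self.threshold
            self.history.append(value)
            return is_anomaly

    detector = AnomalyDetector()
    for v in [10, 11, 9, 10, 12, 10, 11, 9, 10, 11, 10, 95]:
        if detector.observe(v):
            print(f"anomaly detected: {v}")  # fires on 95, not on normal noise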

 

  4. Will the platform barrage my IT staff with alerts?

You want a platform that intelligently reduces the number of alerts by addressing the incidents that are their root cause. A platform with an optimal alert trigger should use machine learning to prioritize incidents based on crowd-sourced feedback and on the number of incidents the team receiving the alerts can actually handle. This negates a lot of the “boy who cried wolf” alerts that lead to alert overload.
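Here is a hedged sketch of that grouping idea (the capacity number and alert fields are hypothetical): correlate raw alerts into incidents by a shared root-cause key, then surface only as many incidents as the team can absorb:

    from collections import defaultdict

    TEAM_CAPACITY = 2  # hypothetical on-call bandwidth

    def correlate(alerts):
        """Group raw alerts into incidents keyed by root cause, then rank."""
        incidents = defaultdict(list)
        for alert in alerts:
            incidents[alert["root_cause"]].append(alert)
        ranked = sorted(incidents.items(), key=lambda kv: len(kv[1]), reverse=True)
        return ranked[:TEAM_CAPACITY]  # don't page beyond team capacity

    alerts = [
        {"msg": "db timeout", "root_cause": "db-primary"},
        {"msg": "api 500s",   "root_cause": "db-primary"},
        {"msg": "disk 91%",   "root_cause": "node-7"},
    ]
    for cause, grouped in correlate(alerts):
        print(cause, "->", len(grouped), "alerts collapsed into one incident")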

 

These are just some of the critical questions that come to mind when selecting an IoE platform. Ask yourself these, along with any others you feel are most relevant to your business. Ask the vendor, then ask their customers for a more complete perspective, and build your short list from there so you can dive deeper. Feel free to chime in below with any questions you feel businesses should ask when looking to purchase an IoE platform.


Testing and Debugging

What is the CANopen protocol?

 

CANopen is a high-level communication protocol and device profile specification based on the CAN (Controller Area Network) protocol. It was developed for embedded networking applications, such as in-vehicle networks.

 

What is the difference between CANopen and J1939?

 

The J1939 protocol is used both for communication between nodes and for diagnostics, whereas plain CAN is used for communication only and has no diagnostic protocol of its own, which is why the UDS and KWP protocols are used over CAN.

 

What is the CAN bus protocol?

 

A Controller Area Network (CAN bus) is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other in applications without a host computer.
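For a feel of that host-less, broadcast design, here is a minimal sketch using the third-party python-can package (assumed installed, with a Linux SocketCAN interface named can0; both are assumptions):

    import can

    # Open the bus; every node sees every frame, there is no host computer.
    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    # Broadcast a frame; receivers decide by ID whether it concerns them.
    msg = can.Message(arbitration_id=0x123, data=[0x11, 0x22], is_extended_id=False)
    bus.send(msg)

    reply = bus.recv(timeout=1.0)  # returns None if nothing arrives in time
    if reply is not None:
        print(hex(reply.arbitration_id), reply.data)
    bus.shutdown()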


What is the J1587 protocol?

 

J1587 is an automotive diagnostic protocol standard developed by the Society of Automotive Engineers (SAE) for heavy-duty and most medium-duty vehicles built after 1985. The J1587 protocol uses different diagnostic connectors. Up to 1995, individual OEMs used their own connectors.

 

What is the FlexRay protocol?

 

FlexRay is an automotive network communications protocol developed by the FlexRay Consortium to govern on-board automotive computing. It is designed to be faster and more reliable than CAN and TTP, but it is also more expensive.


CANopen Introduction

CANopen is a high-level communication protocol and device profile specification based on the CAN (Controller Area Network) protocol. The protocol was developed for embedded networking applications, such as in-vehicle networks. The CANopen umbrella covers a network programming framework, device descriptions, interface definitions and application profiles. CANopen provides a protocol that standardizes communication between devices and applications from different manufacturers. It has been used in a wide range of industries, most notably in automation and motion applications.

In terms of the OSI communication systems model, CAN covers the first two layers: the physical layer and the data link layer. The physical layer defines the signal lines, voltage levels, transmission speeds, and so on. At the data link layer, CAN is a frame-based (message-oriented) protocol. CANopen covers the top five layers: network (addressing, routing), transport (end-to-end reliability), session (synchronization), presentation (data encoded in a standard way, data representation) and application. The application layer describes how to configure, transfer and synchronize CANopen devices; its concepts are covered in specification CiA DS 301. The intention here is to give users a brief overview of the concepts of CANopen.

Figure: CAN and CANopen in the OSI model
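To see those application-layer services in code, here is a minimal sketch using the third-party canopen Python package (assumed installed; the node ID, EDS file name and can0 channel are all assumptions, as real devices ship their own object dictionaries):

    import canopen

    network = canopen.Network()
    network.connect(channel="can0", bustype="socketcan")

    # Attach a hypothetical node 6 with its vendor-supplied object dictionary.
    node = network.add_node(6, "vendor_device.eds")
    node.nmt.state = "OPERATIONAL"  # NMT service: start the node

    # SDO read of object 0x1008, the manufacturer device name per CiA 301.
    print(node.sdo[0x1008].raw)

    network.disconnect()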

Debugging

Debugging is a cyclic activity involving execution testing and code correction. The testing that is done during debugging has a different aim than final module testing. Final module testing aims to demonstrate correctness, whereas testing during debugging is primarily aimed at locating errors.

What is testing and debugging?

 

Testing is the process of finding bugs or errors in a software product; it can be done manually by a tester or be automated. Debugging is the process of fixing the bugs found in the testing phase. A programmer or developer is responsible for debugging, and it cannot be automated.


What do you mean by debugging?

 

Debugging is the routine process of locating and removing computer program bugs, errors or abnormalities, which is methodically handled by software programmers via debugging tools. Debugging checks, detects and corrects errors or bugs to allow proper program operation according to set specifications.

 

Why is debugging needed?

 

In software development, debugging involves locating and correcting code errors in a computer program. Debugging is part of the software testing process and is an integral part of the entire software development lifecycle.

The commonly used debugging strategies are debugging by brute force, the induction strategy, the deduction strategy, the backtracking strategy, and debugging by testing. Brute force is the most commonly used but least efficient method.
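As a small illustration of debugging by testing, the toy sketch below (all names are made up) reproduces a failure with a test and then hands the failing frame to Python’s built-in pdb debugger:

    import pdb

    def average(values):
        return sum(values) / len(values)  # bug: crashes on an empty list

    def test_average():
        assert average([2, 4, 6]) == 4
        assert average([]) == 0  # reproduces the reported crash

    try:
        test_average()
    except ZeroDivisionError:
        pdb.post_mortem()  # opens an interactive prompt at the failing frame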

 

IoE Testing Processes and Open Source Software Tools

  • IoE devices are equipped with IP addresses and have the ability to transmit data over the network. …
  • Pilot testing. …
  • Cross-domain compatibility testing. …

 

 

 

 

What is IoE testing?

 

IoE testing is a type of testing to check IoT devices. Today there is an increasing need to deliver better and faster services, and a huge demand to access, create, use and share data from any device. The thrust is to provide greater insight and control over various interconnected IoE devices.

 

  1. Selenium

Selenium is a testing framework for performing web application testing across various browsers and platforms like Windows, Mac, and Linux. Selenium helps testers write tests in various programming languages like Java, PHP, C#, Python, Groovy, Ruby, and Perl. It also offers record-and-playback features for writing tests without learning the Selenium IDE.

Selenium proudly supports some of the largest and most well-known browser vendors, who make sure Selenium is a native part of their browser. Selenium is undoubtedly the base for many other software testing tools.
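A minimal sketch with Selenium’s Python bindings (assumed installed, with a Chrome driver available on PATH; the URL and assertion are illustrative):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")
        heading = driver.find_element(By.TAG_NAME, "h1")
        assert "Example" in heading.text  # a simple page check
    finally:
        driver.quit()  # always release the browser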

  2. TestingWhiz

TestingWhiz is a test automation tool with the code-less scripting by Cygnet Infotech, a CMMi Level 3 IT solutions provider. TestingWhiz tool’s Enterprise edition offers a complete package of various automated testing solutions like web testing, software testing, database testing, API testing, mobile app testing, regression test suite maintenance, optimization, and automation, and cross-browser testing.

TestingWhiz offers various important features like:

  • Keyword-driven, data-driven testing, and distributed testing
  • Browser Extension Testing
  • Object Eye Internal Recorder
  • SMTP Integration
  • Integration with bug tracking tools like Jira, Mantis, TFS and FogBugz
  • Integration with test management tools like HP Quality Center, Zephyr, TestRail, and Microsoft VSTS
  • Centralized Object Repository
  • Version Control System Integration
  • Customized Recording Rule

3. HPE Unified Functional Testing (HPE UFT, formerly QTP)

HP QuickTest Professional was renamed HPE Unified Functional Testing. HPE UFT offers test automation for functional and regression testing of software applications.

The tool uses the Visual Basic Scripting Edition (VBScript) language to record test processes and operate the various objects and controls of the application under test.

UFT offers various features like:

  • Integration with Mercury Business Process Testing and Mercury Quality Center
  • Unique Smart Object Recognition
  • Error handling mechanism
  • Creation of parameters for objects, checkpoints, and data-driven tables
  • Automated documentation

 

 

 

 

  4. TestComplete

TestComplete is a functional testing platform that offers various solutions to automate testing for desktop, web, and mobile applications by SmartBear Software.

TestComplete offers the following features:

  • GUI testing
  • Scripting Language Support – JavaScript, Python, VBScript, JScript, DelphiScript, C++Script & C#Script
  • Test visualizer
  • Scripted testing
  • Test recording and playback

5. Ranorex

Ranorex Studio offers various testing automation tools that cover testing all desktop, web, and mobile applications.

Ranorex offers the following features:

  • GUI recognition
  • Reusable test codes
  • Bug detection
  • Integration with various tools
  • Record and playback


  6. Sahi

Sahi is a test automation tool for automating web application testing. The open-source Sahi is written in the Java and JavaScript programming languages.

Sahi provides the following features:

  • Performs multi-browser testing
  • Supports ExtJS, ZK, Dojo, YUI, etc. frameworks
  • Record and playback on the browser testing

7. Watir

Watir is an open-source testing tool made up of Ruby libraries to automate web application testing. It is pronounced as “water.”

Watir offers the following features:

  • Tests any language-based web application
  • Cross-browser testing
  • Compatible with business-driven development tools like RSpec, Cucumber, and Test/Unit
  • Tests web page’s buttons, forms, links, and their responses

8. Tosca Testsuite

Tosca Testsuite by Tricentis uses model-based test automation to automate software testing.

Tosca Testsuite comes with the following capabilities:

  • Plan and design test case
  • Test data provisioning
  • Service virtualization network
  • Tests mobile apps
  • Integration management
  • Risk coverage

9. Telerik TestStudio

Telerik TestStudio offers one solution to automate desktop, web, and mobile application testing including UI, load, and performance testing.

Telerik TestStudio offers various compatibilities like:

  • Support for technologies like HTML, AJAX, ASP.NET, JavaScript, Silverlight, WPF, and MVC
  • Integration with Visual Studio 2010 and 2012
  • Record and playback
  • Cross-browser testing
  • Manual testing
  • Integration with bug tracking tools

10. Katalon Studio

Katalon Studio is a free automation testing solution developed by Katalon LLC. The software is built on top of the open-source automation frameworks Selenium and Appium, with a specialized IDE interface for API, web and mobile testing. The tool includes a full package of powerful features that help overcome common challenges in web UI test automation.

Katalon Studio consists of the following features:

  • Built-in object repository, XPath, object re-identification
  • Supports Java/Groovy scripting languages
  • Built-in support for Image-based testing
  • Support Continuous Integration tools like Jenkins & TeamCity
  • Supports a dual-editor interface
  • Customizable execution workflow

 

Physical layer testing

Physical layer testing can be done at the customer’s site, in-house, or at the customer’s end-customer’s site. When testing is complete, a comprehensive report is generated that always includes suggestions for correction. The report layout can also be adapted to fit the customer’s own reporting conventions.

Physical layer testing can be used to test part of the CAN network or the entire network/system, or a single node can be analyzed all the way down to its hardware and electronics.

With physical layer testing you can find errors such as:

  • Cabling
  • Termination
  • Voltage peaks
  • Grounding errors
  • EMC problems
  • Topology errors

and more…

CANopen application layer testing

To test the CANopen functionality of your CANopen device/node or your CANopen network, you need to define the test specification, develop test cases (either manual or automated), run the actual tests and generate a test report that includes the results and suggestions for correction. Testing typically includes the following (a sketch of one automated check appears after the list):

  • Unofficial CiA 301 conformance testing
  • CiA 401 device profile for generic I/O module testing
  • CiA 410 device profile for inclinometer testing
  • Layer setting service (LSS) functionality testing
  • CANopen performance testing

and more…
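As promised above, here is a hedged pytest sketch of one automated application-layer check, assuming the third-party canopen package, a SocketCAN channel can0, and a hypothetical generic I/O device at node 6 with its vendor-supplied EDS file:

    import canopen
    import pytest

    @pytest.fixture
    def node():
        network = canopen.Network()
        network.connect(channel="can0", bustype="socketcan")
        yield network.add_node(6, "vendor_device.eds")  # hypothetical EDS file
        network.disconnect()

    def test_device_type_is_generic_io(node):
        # CiA 301 object 0x1000 (device type): a low word of 401 marks a
        # generic I/O module, the profile that CiA 401 tests build upon.
        device_type = node.sdo[0x1000].raw
        assert device_type & 0xFFFF == 401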