The idea behind Service-Flow
We, the founders of Service-Flow, have decades of experience in designing and implementing service management tools, their implementations and the integrations between them. Even though it made for a good software and consulting business, the end result in most cases did not bring customers the value they were looking for. This was mostly due to a combination of technology limitations, the project team's overall competence in the topic and lack of budget (the cost of doing things properly). In 2011 we decided that "there has to be a better way". We decided that even though we would cannibalize our own business, at the end of the day the value would be far greater than continuing on the traditional path. So we started to build a cloud service that would eliminate the need for technical knowledge about integrations and APIs, while still making it possible to implement and maintain integrations. That way we could bring integration implementation and maintenance as close as possible to the actual stakeholders: the process owners, the people who know how the collaboration between the integrated parties should work.
Since then we have been fortunate enough to find exceptional customers and partners who can truly call themselves early adopters. Together we have developed the world's first truly cloud-based, highly available integration hub, one that has made possible the kinds of multi-vendor integrations that used to make architects wake up screaming in the night. And not only to implement them, but to maintain and develop them further as new needs arise.
The idea and approach are somewhat unique, so we recommend that you take a step back and try to look at the big picture. After that, we would like you to think seriously about what you want to focus on, and what you would rather leave in the hands of a seasoned service provider. Naturally one can do anything oneself, but do you really want to? That way you find yourself being responsible for everything as well.
The Service-Flow way
Service-Flow integration service
Service-Flow is a multi-tenant, cloud-based integration service. The service and all of its components are fully managed and controlled by Service-Flow, so there is no need for end-users to set up any technical management procedures.
Service-Flow is hosted within the EU in a data center that has successfully passed multiple security certifications, e.g. SAS 70 Type II, ISO 27001 and PCI DSS Level 1.
Service-Flow is made highly available and fault-tolerant by building the system as an asynchronous, staged event-driven architecture (SEDA), where components communicate by transmitting messages and responding to events. Communication is further strengthened by persistent queues, which make the system fault-tolerant: the receiver does not even need to be running for the sender to send a message.
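To make the staged event-driven idea concrete, here is a minimal, illustrative sketch (not Service-Flow's actual implementation): each stage owns a queue and a worker thread, and stages communicate only by passing messages, so the sender never has to wait for the receiver. The `Stage` class and its names are assumptions for illustration; a real deployment would use persistent, replicated queues rather than in-memory ones.

```python
import queue
import threading

class Stage:
    """One stage of a staged event-driven pipeline (illustrative sketch)."""

    def __init__(self, name, handler, next_stage=None):
        self.name = name
        self.handler = handler        # function applied to each message
        self.next_stage = next_stage  # where processed messages go, if anywhere
        self.inbox = queue.Queue()    # a real system would use a persistent queue here
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        # The sender only enqueues; it never waits for the receiver.
        self.inbox.put(message)

    def _run(self):
        while True:
            message = self.inbox.get()
            result = self.handler(message)
            if self.next_stage is not None:
                self.next_stage.send(result)
```

A message dropped into the first stage flows asynchronously through the pipeline, one queue hop per stage.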
24/7 monitoring is used to detect problems as they occur and to alert the Service-Flow Operations Team. The Service-Flow database runs in a replicated setup, so a crash of a single database server does not bring the whole system down. The replication is done between separate data centers.
Service-Flow maintains the service live, i.e. without any customer-visible downtime.
The following chapters describe the basic concepts of the overall integration service.
An adapter is the technical component that handles all inbound and outbound communication with a single tool connected to Service-Flow in an integration. Adapters can handle all common communication methods such as SOAP, REST and SFTP. Adapters are also capable of using different authentication methods and attachment handling procedures, and of accommodating tool-specific features and limitations. For inbound traffic, it is possible to listen for web service calls from the connected tool (push), or to poll changes incrementally from an interface (pull).
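The "pull" style described above can be sketched as an adapter that polls the connected tool for records changed since its previous poll. This is a hedged illustration of the general pattern, not Service-Flow's real adapter API; the names `fetch_changes` and `watermark` are invented for the example.

```python
from datetime import datetime, timezone

class PollingAdapter:
    """Illustrative incremental-pull adapter: fetch only what changed since last poll."""

    def __init__(self, fetch_changes):
        # fetch_changes is a hypothetical callable: (updated_since) -> list of records,
        # standing in for a call to the connected tool's query interface.
        self.fetch_changes = fetch_changes
        self.watermark = datetime(1970, 1, 1, tzinfo=timezone.utc)

    def poll(self):
        """Return records changed since the last successful poll, then advance the watermark."""
        changes = self.fetch_changes(self.watermark)
        if changes:
            self.watermark = max(r["updated_at"] for r in changes)
        return changes
```

Each poll advances the watermark, so already-seen records are not fetched again; a push adapter would instead expose a listener endpoint and skip the polling loop entirely.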
There are different kinds of adapters. Tool-specific adapters (e.g. ServiceNow, JIRA, Zendesk) vary depending on the integration features of the tool: for example, ServiceNow offers a variety of options to use, while JIRA and Zendesk are relatively fixed and standard. More generic adapters are used when connecting a custom-developed tool or bus to Service-Flow. In both cases, all features of the service are available.
Adapters are always part of the subscription and are provided and maintained by Service-Flow. All needed new features are developed by Service-Flow without any development cost to customers. That said, users are able to configure the variables of the adapter in use to suit their particular needs. The basic principle when creating an adapter is that there is no need for any Service-Flow-specific configuration on the connected tool's side; Service-Flow has all the means to adapt to the ways of the connected tool.
An endpoint is a logical instance of an adapter, i.e. a connection to a connected tool. The entity and message types used, authentication credentials and other needed variables are defined in the endpoint.
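As an illustration, an endpoint can be thought of as a small configuration object tying an adapter to credentials and the entity and message types in use. The keys below are invented for the sketch and are not Service-Flow's actual configuration schema.

```python
# Hypothetical endpoint definition: an adapter instance plus its
# credentials and the entity/message types it carries. All keys and
# values here are illustrative assumptions.
endpoint = {
    "name": "servicenow-prod",        # assumed endpoint name
    "adapter": "ServiceNow",          # which adapter this endpoint instantiates
    "auth": {"type": "basic", "user": "integration_user"},
    "entity_types": {
        "incident": ["CreateIncident", "UpdateIncident", "CloseIncident"],
    },
}
```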
Entity types define the entities that are integrated to and from the connected tool. There can be multiple entity types within an endpoint. Typical entity types in an integration are incident, request, change and cmdb-ci.
Message types can be used to configure specific integration messages related to an entity type. For example, an incident entity type could have the message types CreateIncident, UpdateIncident and CloseIncident, each of which could carry slightly different fields. Message types can also be used when the connected tool's API has different HTTP methods for different operations: it is typical, for example, that the POST method is used when the ticket is created in the target system, but PUT is used when that ticket is later updated.
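The mapping from message types to HTTP methods can be sketched as a small lookup table. The paths and payload shape below are hypothetical examples of a connected tool's API, not any real tool's interface.

```python
# Illustrative mapping from message type to HTTP method and path:
# creation uses POST, later updates to the same ticket use PUT.
MESSAGE_TYPES = {
    "CreateIncident": {"method": "POST", "path": "/api/incidents"},
    "UpdateIncident": {"method": "PUT",  "path": "/api/incidents/{id}"},
    "CloseIncident":  {"method": "PUT",  "path": "/api/incidents/{id}"},
}

def build_request(message_type, ticket):
    """Return the (method, path) pair to use for a given outbound message type."""
    spec = MESSAGE_TYPES[message_type]
    return spec["method"], spec["path"].format(id=ticket.get("id", ""))
```

So a first CreateIncident message would go out as a POST, while a later UpdateIncident for the same ticket would go out as a PUT against that ticket's own path.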
The broker is the brain of the service. All the tools needed to map, manipulate and route messages reside here. The broker also stores the integration conversations between the related parties. Adapters produce data to the broker in a canonical format, which means that there is no fixed data model that integrations need to obey. After handling a received message, the broker pushes the canonical message to the target endpoint for outbound processing, where the message is finally transformed into the tool-specific format and sent.
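As a rough sketch of the idea, a canonical message can be pictured as a flat, tool-agnostic structure that the broker works on regardless of which tool produced it. The field names below are assumptions for illustration, not Service-Flow's internal format.

```python
# Hypothetical canonical message: the source endpoint, the entity and
# message type, and a free-form field payload with no fixed data model.
def to_canonical(source_endpoint, entity_type, message_type, fields):
    """Wrap tool-specific data in an illustrative tool-agnostic envelope."""
    return {
        "source": source_endpoint,    # e.g. "servicenow-prod"
        "entity_type": entity_type,   # e.g. "incident"
        "message_type": message_type, # e.g. "UpdateIncident"
        "fields": dict(fields),       # arbitrary key/value data from the tool
    }
```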
A routing rule is the configuration object that defines what is done to a message received from a source endpoint. The rule defines how the message data is mapped and how the message is enriched before it is sent to the target endpoint, to be relayed to the connected tool as the entity and message type described in the rule.
Multiple rules can be triggered by a single message received from an endpoint. This means that a single message can be routed to multiple endpoints, and onwards to multiple connected tools.
A single rule can also trigger multiple outbound messages to an endpoint.
A routing rule can also store information in the ticket conversation (as conversation attributes) for later use. These are used, for example, when there is a need to know whether the ticket has been resolved earlier, or what the status of the target ticket is (making it possible to trigger the right JIRA transitions).
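The routing-rule behaviour described above, mapping fields, enriching the result, and stashing a conversation attribute for later, can be sketched as follows. The field names and the `apply_rule` function are illustrative assumptions, not Service-Flow's configuration language.

```python
# Hypothetical routing rule: map source fields to target field names,
# enrich the outbound message, and remember state in the conversation.
def apply_rule(message, conversation):
    """Produce a target message from a canonical one; names are illustrative."""
    field_map = {"short_description": "summary", "state": "status"}
    target = {dst: message["fields"][src]
              for src, dst in field_map.items() if src in message["fields"]}
    target["source_system"] = message["source"]      # enrichment step
    # Store the latest known status as a conversation attribute for later use,
    # e.g. to pick the right transition on the target side.
    conversation["last_status"] = target.get("status")
    return target
```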
A ticket conversation is the context where all messages and routing information for a single integrated entity (e.g. an incident) are stored. For example, in an integration between ticketing systems the conversation contains information on all the messages passed between the systems for a single ticket. The conversation makes it much easier to understand what has happened and at what stage the integration is. It is also possible to store data in the conversation for use later in the ticket lifecycle.
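An individual conversation record might then look something like the sketch below: the linked ticket identifiers, the stored attributes, and the message history for one integrated entity. The structure is an assumption for illustration only.

```python
# Hypothetical ticket conversation for one integrated incident: the
# ticket pair, stored conversation attributes, and the message history.
conversation = {
    "source_ticket": "INC0012345",               # illustrative ticket IDs
    "target_ticket": "PROJ-421",
    "attributes": {"resolved_before": False},    # data stored for later use
    "messages": [
        {"direction": "inbound",  "message_type": "CreateIncident"},
        {"direction": "outbound", "message_type": "CreateIssue"},
    ],
}
```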
Connected tool (Standard connection)
Customer systems are (ITSM) tools connected to one another through the Service-Flow solution. These tools are usually located either inside customers' internal networks or in the cloud as SaaS applications. The connection to the tool is designed to be as standard as possible. The details of this connection are always ultimately up to the policies and design of the tool in question.