Oracle Event Processing is designed to provide high throughput and low latency processing against continuously streaming data, what is sometimes called Complex Event Processing. It handles real-time correlation between incoming data streams, time-sensitive alerts, aggregations and calculations, pattern detection in the stream and so on. Critically, especially when one is dealing with very large volumes of streaming data, actions can be taken by processing data before it gets stored.
Oracle Event Processing is an Equinox OSGi™-based Java application server. The server uses adaptors to connect to incoming data streams (JMS, HTTP and so on) and then converts this data to POJOs for processing inside the server. Continuous Query Language (CQL), a variant of SQL designed for streaming data, is used to process these streams. Listeners or “sinks” written in Java then interface with other systems to take appropriate action. Within the server, several components such as security, logging and JMS access are OSGi bundles reused and shared with Oracle WebLogic Server. Because Oracle Event Processing is OSGi-based it can be stripped down, with modules removed, to run on small devices, as well as scaled up to very large servers. This allows it to run on a wide range of hardware, from edge devices to gateway devices to enterprise data servers. This ability to run on very small devices is particularly important for handling the Internet of Things, one of the use cases for this kind of event processing.
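To make the adaptor-to-POJO-to-sink flow concrete, here is a minimal sketch in plain Java. The TemperatureReading class, the EventSink interface and the threshold check are illustrative assumptions made up for this post; the actual Oracle Event Processing API defines its own adaptor and sink interfaces.

```java
// Illustrative sketch only: a raw input line is converted to a POJO ("adaptor"),
// filtered ("processor") and handed to a listener ("sink"). Names are hypothetical,
// not the Oracle Event Processing API.
import java.util.List;

public class MiniEventFlow {

    // The POJO an adaptor would produce from an incoming message.
    record TemperatureReading(String sensorId, double celsius, long timestampMillis) {}

    // What the product calls a listener or "sink": code that takes action on an event.
    interface EventSink {
        void onEvent(TemperatureReading reading);
    }

    // "Adaptor": parse a raw CSV line (e.g. arriving over JMS or HTTP) into a POJO.
    static TemperatureReading parse(String rawLine) {
        String[] parts = rawLine.split(",");
        return new TemperatureReading(parts[0], Double.parseDouble(parts[1]), Long.parseLong(parts[2]));
    }

    public static void main(String[] args) {
        // Stand-in for a continuous stream; in the real product events arrive indefinitely.
        List<String> incoming = List.of("sensor-1,21.5,1000", "sensor-1,95.2,2000");

        // Sink that takes action, here just printing an alert.
        EventSink alertSink = r ->
                System.out.printf("ALERT: %s reported %.1f C%n", r.sensorId(), r.celsius());

        for (String raw : incoming) {
            TemperatureReading reading = parse(raw);
            // "Processor": a static condition applied to every event as it arrives.
            if (reading.celsius() > 90.0) {
                alertSink.onEvent(reading);
            }
        }
    }
}
```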
Like most streaming products, Oracle Event Processing is a little like an inverted database: the data is dynamic and the queries are static, where a normal database has static data being processed by dynamic queries. This means that as new data elements, new events, stream in they are processed against all the defined queries for that channel. A certain number of events, or all the events in a time window, can be kept in memory as part of each channel.
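As an illustration of this inversion, the sketch below keeps a five-second window of events in memory and evaluates the same fixed query (an average) every time a new event arrives. It is plain Java written for this post, assuming a simple Reading record, and is not how Oracle Event Processing implements its channels internally.

```java
// Illustrative only: a fixed "query" (average over a time window) is applied
// each time a new event arrives, rather than a query being run against stored data.
import java.util.ArrayDeque;
import java.util.Deque;

public class SlidingWindowAverage {

    record Reading(double value, long timestampMillis) {}

    private static final long WINDOW_MILLIS = 5_000; // keep the last five seconds of events
    private final Deque<Reading> window = new ArrayDeque<>();

    // Called for every new event on the channel.
    void onEvent(Reading reading) {
        window.addLast(reading);
        // Evict events that have fallen out of the time window.
        while (!window.isEmpty()
                && reading.timestampMillis() - window.peekFirst().timestampMillis() > WINDOW_MILLIS) {
            window.removeFirst();
        }
        // The static "query": a running average over whatever is currently in the window.
        double avg = window.stream().mapToDouble(Reading::value).average().orElse(0.0);
        System.out.printf("avg over last 5s = %.2f (window size %d)%n", avg, window.size());
    }

    public static void main(String[] args) {
        SlidingWindowAverage channel = new SlidingWindowAverage();
        channel.onEvent(new Reading(10.0, 1_000));
        channel.onEvent(new Reading(20.0, 3_000));
        channel.onEvent(new Reading(30.0, 9_000)); // the earlier readings have aged out by now
    }
}
```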
Each adaptor results in a channel that is processed by a CQL processor (though several channels can be processed by a single CQL processor, and channels can feed several CQL processors). CQL Processors can output new channels, can join databases and channels, and can access data stored in Coherence, Oracle’s distributed caching infrastructure. The pattern matching features of CQL (Continuous Query Language) were accepted as an ANSI SQL standard in 2012 and have now been implemented in the Oracle 12c Database. The end points are a set of output adaptors that connect to a business process or a decision engine like Oracle RTD, or that simply send emails. This network of adaptors, channels, processors, cache, Event Beans and Event Sinks (event-handling Java code) is known as an Event Processing Network and is the core of the product.
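One of those steps, joining an event stream with cached reference data, can be pictured with the hedged sketch below: a plain Java Map stands in for a Coherence cache, and the Trade and EnrichedTrade types are made up for the example rather than taken from the product.

```java
// Illustrative only: enriching streaming events with reference data from a cache.
// A Map stands in here for a distributed cache such as Coherence.
import java.util.List;
import java.util.Map;

public class StreamCacheJoin {

    record Trade(String symbol, int quantity) {}
    record EnrichedTrade(String symbol, String companyName, int quantity) {}

    public static void main(String[] args) {
        // Reference data that, in a real deployment, would live in the cache.
        Map<String, String> companyBySymbol = Map.of("ORCL", "Oracle Corp", "IBM", "IBM Corp");

        List<Trade> stream = List.of(new Trade("ORCL", 100), new Trade("IBM", 50));

        // Each incoming event is joined against the cached reference data as it arrives.
        for (Trade trade : stream) {
            String company = companyBySymbol.getOrDefault(trade.symbol(), "unknown");
            EnrichedTrade enriched = new EnrichedTrade(trade.symbol(), company, trade.quantity());
            System.out.println(enriched);
        }
    }
}
```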
Oracle Event Processing CQL Processors support:
- Pattern matching in an event stream such as detecting a specific trading pattern or temperature pattern using regular expressions.
- Detecting the absence of events or missing events in a sequence.
- Continuous querying to filter streams, calculate running totals or averages etc. using structures like Group By (a sketch of this appears after the list).
- Joining of event streams with data stored in a relational database, a NoSQL datastore or a Coherence cache.
- Geospatial and other Java-based processing libraries.
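To give a flavour of the continuous-query style (rendered here as plain Java rather than CQL, with made-up event fields), the sketch below keeps a running average per sensor as events stream in, roughly what a Group By over a stream produces on every new event.

```java
// Illustrative only: a continuously maintained per-key average, the kind of result
// a grouped continuous query produces each time a new event arrives.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RunningGroupedAverage {

    record Reading(String sensorId, double value) {}

    // Running count and sum per group key.
    private record Stats(long count, double sum) {
        double average() { return count == 0 ? 0.0 : sum / count; }
    }

    private final Map<String, Stats> statsBySensor = new HashMap<>();

    void onEvent(Reading r) {
        statsBySensor.merge(r.sensorId(), new Stats(1, r.value()),
                (old, add) -> new Stats(old.count() + add.count(), old.sum() + add.sum()));
        System.out.printf("%s -> running avg %.2f%n",
                r.sensorId(), statsBySensor.get(r.sensorId()).average());
    }

    public static void main(String[] args) {
        RunningGroupedAverage query = new RunningGroupedAverage();
        List<Reading> stream = List.of(
                new Reading("sensor-1", 10), new Reading("sensor-2", 5), new Reading("sensor-1", 20));
        stream.forEach(query::onEvent);
    }
}
```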
All of this allows event streams to be selectively processed, and processed in multiple ways. Oracle Event Processing is used to power lots of dashboards with streaming insight, to send emails and, increasingly, to invoke a decision engine like Oracle Real-Time Decisions (reviewed here) to decide what to do in response to an event.
You can get more information on Oracle Event Processing here.
Cross-posted from JTonEDM.