Tag Archives: Big Data

Karate SQL? KSQL vs. Kafka Streams

Just kidding :-) There is no Karate SQL that I'm aware of...

Naturally, you would use Kafka Streams when your code runs on the JVM and requires SQL-like access to the data.

“Kafka Streams is the core API for stream processing on the JVM: Java, Scala, Clojure etc. It is based on a DSL (Domain Specific Language) that provides a declaratively-styled interface where streams can be joined, filtered, grouped or aggregated using the DSL itself. It also provides functionally-styled mechanisms — map, flatMap, transform, peek, etc”

(Source: Building a Microservices Ecosystem with Kafka Streams and KSQL,
https://www.confluent.io/blog/building-a-microservices-ecosystem-with-kafka-streams-and-ksql/)

Check out KSQL, the streaming SQL engine built on Kafka Streams, for cases where you want to run SQL queries against Kafka from outside the JVM.

You can run a KSQL container as a sidecar alongside your app container and let the app act on plain Kafka topic events, removing the need for the app itself to implement the query logic that finds the relevant data in the stream.

Example: your microservice needs to act on new customer orders. The sidecar container runs a KSQL SELECT and streams only the relevant event data to your app, one event at a time (configurable).
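
As a rough sketch (the topic name, columns and status value below are assumptions, not taken from the original post), the sidecar's KSQL statements for this example could look like this:

  -- Register a stream over the existing Kafka topic the orders are published to
  CREATE STREAM orders (order_id VARCHAR, customer_id VARCHAR, status VARCHAR)
    WITH (KAFKA_TOPIC = 'orders', VALUE_FORMAT = 'JSON');

  -- Continuously filter the stream so only new customer orders reach the app,
  -- writing them to a dedicated topic the microservice consumes as plain Kafka events
  CREATE STREAM new_customer_orders
    WITH (KAFKA_TOPIC = 'new_customer_orders') AS
    SELECT order_id, customer_id
    FROM orders
    WHERE status = 'NEW';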

Each of your microservice replicas gets a KSQL sidecar that receives its own copy of the same data.

Sounds like fun? Well, that's because it is!

Maybe it should be called Karate SQL after all..

P.S.

If you use AWS and need Kafka (otherwise you would probably use AWS Kinesis), here is a nice basic starter automation for setting up Kafka on AWS.

AWS Athena says No so beautifully

Amazon Athena allows you to run ANSI SQL directly against your S3 buckets, supporting a multitude of file and data formats.

Here are my insights from a comprehensive YouTube session led by Abhishek Sinha:

  • No ETL needed
  • No Servers or instances
  • No warmup required
  • No data load before querying
  • No need for DRP – it’s multi-AZ

Athena uses Presto (an in-memory distributed query engine) and Hive (DDL for creating tables that reference your S3 data).
You pay for the amount of data scanned, so you can optimize both performance and cost (see the sketch after this list) if you:

  1. Compress your data
  2. Store it in a columnar format
  3. Partition it
  4. Convert it to Parquet / ORC format
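
As a hedged sketch of points 1–4 (the bucket, prefix, columns and partition key below are hypothetical), a table definition over compressed, columnar, partitioned Parquet data could look like this:

  -- Hypothetical table over Parquet files in S3 (Parquet is columnar and typically compressed),
  -- partitioned by date
  CREATE EXTERNAL TABLE orders (
    order_id string,
    customer_id string,
    total double
  )
  PARTITIONED BY (dt string)
  STORED AS PARQUET
  LOCATION 's3://my-bucket/orders/';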

Querying in Athena:

  1. You can query Athena via the AWS Console (dozens of queries can run in parallel) or through any JDBC-enabled tool such as SQL Workbench
  2. You can stream Athena query results into S3 or Amazon QuickSight (SPICE)
  3. Creating a table for querying in Athena is merely writing a schema that you later refer to
  4. The table schemas you create for queries are fully managed and highly available
  5. Queries act as the route to the data, so every time you execute a query it re-evaluates everything in the relevant buckets
  6. To create a partition, you specify a key value plus a bucket and prefix that point to the data belonging to that partition (see the sketch after this list)
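
To make point 6 concrete, here is a sketch reusing the hypothetical orders table from above: a partition key value is registered against a specific bucket and prefix, and filtering on that key limits the scan, and therefore the cost, to that prefix only:

  -- Point a partition key value at the bucket + prefix that holds its data
  ALTER TABLE orders ADD PARTITION (dt = '2018-01-01')
    LOCATION 's3://my-bucket/orders/dt=2018-01-01/';

  -- The partition-key filter restricts the scan (and the bill) to that prefix
  SELECT customer_id, SUM(total) AS daily_total
  FROM orders
  WHERE dt = '2018-01-01'
  GROUP BY customer_id;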

Just note that Athena serves specific use cases (such as non-urgent ad-hoc queries), while other big data tools fulfill other needs – AWS Redshift is aimed at the fastest query times over large amounts of structured data, while AWS Kinesis Analytics is aimed at querying rapidly streaming data.

Want to learn more about Big Data and AWS? Visit http://allcloud.io

GCP Bigtable – main facts

  • Is the basis of many Google products
  • A wide-column NoSQL storage system
  • Does not offer secondary indexes – only a single range index on the row key
  • Its design is the basis for HBase in the Hadoop big data ecosystem
  • You pay for storage separately
  • You pay for a minimum of 3 nodes and can expand as you need
  • Nodes are needed only for read/write throughput – not for storage
  • Supports massive amounts of reads/writes, but no locking or multi-row transaction support
  • Is not fully highly available, since data is occasionally unavailable while it is moved around
  • Great for big queries, less so for short, rapid ones

https://cloud.google.com/bigtable/