Write to a Stream
This page explains how to use the Macrometa Stream Databricks Client Connector to integrate Apache Spark with Macrometa streams, allowing you to write data you have processed and analyzed in a Databricks environment to a Macrometa stream.
- Set up your target options:

  ```scala
  val targetOptions = Map(
    "regionUrl" -> "<REGION_URL>",
    "port" -> "<PORT>",
    "apikey" -> "<APIKEY>",
    "fabric" -> "<FABRIC>",
    "tenant" -> "<TENANT>",
    "replication" -> "<REPLICATION>",
    "stream" -> "<TARGET_STREAM>",
    "checkpointLocation" -> "<CHECKPOINT_LOCATION>"
  )
  ```
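Hardcoding an API key in notebook source is easy to leak. As a minimal sketch of an alternative, the same map could be assembled from environment variables; the `MACROMETA_*` variable names and the `buildTargetOptions` helper are assumptions for illustration, not part of the connector's documented configuration:

```scala
// Sketch: build the connector options from environment variables so the
// API key is not hardcoded. The MACROMETA_* variable names are assumptions.
def buildTargetOptions(env: String => Option[String]): Map[String, String] = {
  val keys = Seq("regionUrl", "port", "apikey", "fabric", "tenant",
                 "replication", "stream", "checkpointLocation")
  keys.flatMap { k =>
    // e.g. the "regionUrl" option is read from the MACROMETA_REGIONURL variable
    env(s"MACROMETA_${k.toUpperCase}").map(v => k -> v)
  }.toMap
}
```

With this helper, `buildTargetOptions(sys.env.get)` produces a map in the same shape as `targetOptions` above, containing only the options that are actually set.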
- Write to the Macrometa stream. The code block below assumes the stream you are reading data from has the properties `symbol` and `ma`. Replace them with your own schema:

  ```scala
  val query = inputStream.select("symbol", "ma")
    .withColumnRenamed("ma", "value")
    .writeStream
    .format("com.macrometa.spark.stream.MacrometaTableProvider")
    .options(targetOptions)
    .start()
  ```
- Wait for termination:

  ```scala
  query.awaitTermination()
  ```
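A missing or blank option typically surfaces only after the streaming query starts. As a small sketch, you could fail fast by checking the options map first; the `missingOptions` helper below is illustrative and not part of the connector API:

```scala
// Sketch: report required connector options that are absent or blank,
// so the check can run before writeStream/start(). Illustrative helper only.
def missingOptions(options: Map[String, String]): Seq[String] = {
  val required = Seq("regionUrl", "port", "apikey", "fabric", "tenant",
                     "replication", "stream", "checkpointLocation")
  // A key is "missing" if it is not present or its value is blank.
  required.filter(k => options.get(k).forall(_.trim.isEmpty))
}
```

Before starting the query, `require(missingOptions(targetOptions).isEmpty, "missing Macrometa options")` turns a silent misconfiguration into an immediate, readable error.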