Read & Write to Fabric Warehouse from Notebooks | Microsoft Fabric Tutorial #fabrictutorial

In this tutorial, you'll learn how to read data from a Microsoft Fabric Warehouse and write it to a Lakehouse, and vice versa, all within a Fabric Notebook using PySpark. This allows seamless movement between the warehouse's structured tables and the Lakehouse's unstructured or semi-structured file storage.

📥 1. Read Table from Fabric Warehouse & Write to Lakehouse Files

from datetime import datetime

# Spark connector for Fabric Data Warehouse; enables spark.read.synapsesql / df.write.synapsesql
import com.microsoft.spark.fabric

# Read from Warehouse
df = spark.read.synapsesql("TechDWH.dbo.Employees")
display(df)

# Generate timestamped filename
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
output_path = f"Files/Employees_{timestamp}.parquet"

# Write to Lakehouse Files (the timestamped path is unique per run, so "overwrite" rarely triggers)
df.write.mode("overwrite").parquet(output_path)

print(f"Data written to: {output_path}")

📤 2. Read from Lakehouse & Write to Warehouse Table

from datetime import date

# Spark connector for Fabric Data Warehouse; enables df.write.synapsesql
import com.microsoft.spark.fabric

# Read CSV file from Lakehouse (with header=True only, every column is read as a string)
order_df = spark.read.option("header", True).csv("Files/order.csv")
display(order_df)

# Generate table name with today’s date
today = date.today().strftime("%Y%m%d")
table_name = f"TechDWH.dbo.Order_{today}"

# Write to Fabric Warehouse table
order_df.write.mode("overwrite").synapsesql(table_name)

print(f"Data written to table: {table_name}")

✅ Benefits of Using Fabric Notebooks

  • Dynamic integration between Warehouse and Lakehouse
  • Use of PySpark APIs for flexible data manipulation
  • Fast prototyping and data engineering workflows
  • Great for automation and scheduled execution
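For scheduled execution, the timestamp and date naming used in the two steps above can be factored into small helpers so each run targets a fresh file and table. A sketch using only the standard library; the function names are my own, not a Fabric API:

```python
from datetime import datetime, date

def timestamped_file_path(base_name: str, ext: str = "parquet") -> str:
    """Lakehouse Files path with a run timestamp, e.g. Files/Employees_20250101_120000.parquet."""
    ts = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"Files/{base_name}_{ts}.{ext}"

def dated_table_name(warehouse: str, schema: str, base_name: str) -> str:
    """Three-part warehouse table name suffixed with today's date, e.g. TechDWH.dbo.Order_20250101."""
    return f"{warehouse}.{schema}.{base_name}_{date.today().strftime('%Y%m%d')}"
```

A scheduled notebook run would then call, for example, df.write.mode("overwrite").parquet(timestamped_file_path("Employees")) and order_df.write.mode("overwrite").synapsesql(dated_table_name("TechDWH", "dbo", "Order")).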

🎬 Watch the Full Tutorial

Blog post written with the help of ChatGPT.