Data Analysis with Python and PySpark

Jonathan Rioux

Language: English

Publisher: Manning

Published: Apr 12, 2022

Description:

Think big about your data! PySpark brings the powerful Spark big data processing engine to the Python ecosystem, letting you seamlessly scale up your data tasks and create lightning-fast pipelines.

In Data Analysis with Python and PySpark you will learn how to:

Manage your data as it scales across multiple machines
Scale up your data programs with full confidence
Read and write data to and from a variety of sources and formats (see the sketch after this list)
Clean messy data with PySpark’s data manipulation functionality
Discover new data sets and perform exploratory data analysis
Build automated data pipelines that transform, summarize, and get insights from data
Troubleshoot common PySpark errors
Create reliable long-running jobs
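
To make a couple of these goals concrete, here is a minimal sketch (not taken from the book) that reads a CSV file, cleans it, and writes it back out as Parquet. The file paths and column names are hypothetical, chosen only for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clean-and-write").getOrCreate()

# Read a (hypothetical) CSV file, letting Spark infer the column types.
raw = spark.read.csv("./data/orders.csv", header=True, inferSchema=True)

# Deal with messy data: drop rows missing an order id, normalize a text column.
cleaned = (
    raw.dropna(subset=["order_id"])
       .withColumn("country", F.upper(F.col("country")))
)

# Write the result back out in a different format.
cleaned.write.mode("overwrite").parquet("./data/orders_clean")
```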

Data Analysis with Python and PySpark is your guide to delivering successful Python-driven data projects. Packed with relevant examples and essential techniques, this practical book teaches you to build pipelines for reporting, machine learning, and other data-centric tasks. Quick exercises in every chapter help you practice what you’ve learned and quickly start integrating PySpark into your data systems. No previous knowledge of Spark is required.

About the technology
The Spark data processing engine is an amazing analytics factory: raw data comes in, insight comes out. PySpark wraps Spark’s core engine with a Python-based API. It smooths out Spark’s steep learning curve and makes this powerful tool available to anyone working in the Python data ecosystem.
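
As a rough illustration of what “a Python-based API” means in practice, here is a minimal sketch (assumed, not from the book): a SparkSession is the Python entry point to the engine, and data frame operations are expressed as ordinary Python method calls. The app name and sample data are invented for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# The SparkSession is the Python entry point to Spark's engine.
spark = SparkSession.builder.appName("hello-pyspark").getOrCreate()

# A tiny in-memory data frame; the same API applies to data spread
# across an entire cluster.
df = spark.createDataFrame(
    [("alice", 3), ("bob", 5), ("alice", 2)],
    ["name", "visits"],
)

# The transformation reads like plain Python; Spark plans and
# distributes the actual work.
df.groupBy("name").agg(F.sum("visits").alias("total_visits")).show()
```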

About the book
Data Analysis with Python and PySpark helps you solve the daily challenges of data science with PySpark. You’ll learn how to scale your processing capabilities across multiple machines while ingesting data from any source—whether that’s Hadoop clusters, cloud data storage, or local data files. Once you’ve covered the fundamentals, you’ll explore the full versatility of PySpark by building machine learning pipelines, and blending Python, pandas, and PySpark code.
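
For a taste of “blending Python, pandas, and PySpark code,” the sketch below uses a Series-to-Series pandas UDF, one common way to run pandas logic inside a Spark data frame. The column names, UDF name, and tax rate are hypothetical; pandas UDFs also require PyArrow to be installed.

```python
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql import types as T

spark = SparkSession.builder.appName("blend-pandas").getOrCreate()

df = spark.createDataFrame(
    [(1, 10.0), (2, 12.5), (3, 7.25)],
    ["id", "amount"],
)

# A Series-to-Series pandas UDF: the logic is written with pandas,
# but Spark applies it in parallel across partitions.
@F.pandas_udf(T.DoubleType())
def add_tax(amount: pd.Series) -> pd.Series:
    return amount * 1.15  # hypothetical 15% tax rate, for illustration only

df.withColumn("amount_with_tax", add_tax(F.col("amount"))).show()
```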

What's inside

Organizing your PySpark code
Managing your data, no matter the size
Scaling up your data programs with full confidence
Troubleshooting common data pipeline problems
Creating reliable long-running jobs

About the reader
Written for data scientists and data engineers comfortable with Python.

About the author
As an ML director for a data-driven software company, Jonathan Rioux uses PySpark daily. He teaches PySpark to data scientists, engineers, and data-savvy business analysts.

Table of Contents

1 Introduction
PART 1 GET ACQUAINTED: FIRST STEPS IN PYSPARK
2 Your first data program in PySpark
3 Submitting and scaling your first PySpark program
4 Analyzing tabular data with pyspark.sql
5 Data frame gymnastics: Joining and grouping
PART 2 GET PROFICIENT: TRANSLATE YOUR IDEAS INTO CODE
6 Multidimensional data frames: Using PySpark with JSON data
7 Bilingual PySpark: Blending Python and SQL code
8 Extending PySpark with Python: RDD and UDFs
9 Big data is just a lot of small data: Using pandas UDFs
10 Your data under a different lens: Window functions
11 Faster PySpark: Understanding Spark’s query planning
PART 3 GET CONFIDENT: USING MACHINE LEARNING WITH PYSPARK
12 Setting the stage: Preparing features for machine learning
13 Robust machine learning with ML Pipelines
14 Building custom ML transformers and estimators