
Understanding Execution

To run code in Alph notebooks, you need a kernel - a computational engine that executes your code and returns results. Alph provides two ways to get a kernel:
  1. Projects: Full-featured cloud compute environments (recommended)
  2. Quick Execute: Serverless execution (coming soon)

Projects and Kernels

Projects are Docker-based computational environments running on Alph’s cloud infrastructure.

What is a Project?

A project provides:
  • Jupyter kernel: Python, R, or Julia execution environment
  • Compute resources: CPU, memory, and optional GPU
  • Persistent storage: Files and data persist across sessions
  • Terminal access: Full shell access to the environment
  • Port forwarding: Run web apps and services
Think of a project as your personal cloud computer for data science work.

Creating a Project

  1. Navigate to Projects: from your organization dashboard, click Projects.
  2. Create a new project: click New Project and configure the settings below.
Basic Settings:
  • Name: Descriptive project name
  • Slug: URL-friendly identifier
Compute Type:
  • Micro: 0.5 CPU, 1GB RAM (free tier)
  • Small: 2 CPU, 4GB RAM (Pro tier)
  • GPU: 4 CPU, 16GB RAM, T4 GPU (Expert tier)
Environment:
  • Python version: 3.9, 3.10, or 3.11
  • Conda/pip: Pre-install packages
  3. Launch the project: click Create. Your project starts in roughly 30-60 seconds.

Connecting a Notebook to a Project

From the notebook editor:
  1. Open your notebook
  2. Click the Kernel dropdown in the toolbar
  3. Select Connect to Project
  4. Choose your project from the list
  5. The kernel connects automatically
You can also connect from the project page.
One kernel can only connect to one notebook at a time. Multiple notebooks can use the same project, but you’ll need to disconnect one before connecting another.

Running Cells

Once connected to a kernel, you can execute code cells.

Execution Methods

Method           Shortcut           Behavior
Run cell         Shift + Enter      Execute and select next cell
Run in place     Ctrl/Cmd + Enter   Execute without moving selection
Run and insert   Alt + Enter        Execute and insert new cell below
Run all          Toolbar button     Execute all cells top to bottom
Run selected     Toolbar button     Execute multiple selected cells

Execution Status

Cells show their execution state:

  • Queued: [*] - waiting to execute
  • Running: [*] - currently executing (animated)
  • Complete: [5] - execution number shown

Cell Outputs

Alph renders various output types:
  • Print statements
  • Return values
  • Dataframe displays
  • JSON output
  • HTML rendering
These cover text and data, visualizations, rich media, and error tracebacks.
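A single cell can produce several of these output types at once. A minimal sketch using pandas (pre-installed in Alph projects); the city data is purely illustrative:

```python
import pandas as pd

# Stream output: print statements appear as plain text in the cell output.
print("processing complete")

# The last expression in a cell is rendered richly; a DataFrame
# shows up as a formatted table in the notebook.
df = pd.DataFrame({"city": ["Oslo", "Lima"], "temp_c": [3, 19]})
df
```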

Kernel Management

Kernel States

Your kernel can be in several states:
  • Idle: Ready to execute code
  • Busy: Currently running code
  • Starting: Kernel is initializing
  • Dead: Kernel crashed or stopped
  • Disconnected: Connection lost
Check kernel status in the toolbar indicator.

Kernel Operations

Restart: clears all variables and resets kernel state.
  1. Click the Kernel menu
  2. Select Restart Kernel
  3. Optionally run all cells after restart
When to restart:
  • After installing packages
  • When variables are in an unexpected state
  • To ensure reproducibility
Interrupt: stops the currently running code.
  1. Click the Stop button (or press I I)
  2. The kernel attempts to interrupt execution
When to interrupt:
  • Code is taking too long
  • You notice an error and want to stop
  • An infinite loop is running
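In Python kernels, an interrupt typically surfaces as a KeyboardInterrupt inside the running cell, so long-running code can catch it and clean up. A sketch (the sleep loop stands in for real work):

```python
import time

# Pressing Stop typically raises KeyboardInterrupt inside the running cell,
# so long-running code can catch it and exit cleanly.
try:
    for step in range(3):
        time.sleep(0.1)  # stand-in for long-running work
    status = "finished"
except KeyboardInterrupt:
    status = "interrupted"  # reached when the kernel is interrupted mid-loop

print(status)
```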
Shutdown: completely stops the kernel.
  1. Disconnect the notebook
  2. Stop the project (optional)
Shutting down saves resources when you're done working.

Installing Packages

Add Python packages to your project environment.

Using pip

# Install a single package
!pip install pandas

# Install specific version
!pip install numpy==1.24.0

# Install multiple packages
!pip install scikit-learn matplotlib seaborn

# Install from requirements file
!pip install -r requirements.txt

Using conda

# Install from conda-forge
!conda install -c conda-forge xgboost

# Update package
!conda update pandas
After installing packages, restart your kernel to ensure proper loading.

Pre-installed Packages

Alph projects come with common data science packages:
  • Data: pandas, numpy, polars
  • Viz: matplotlib, seaborn, plotly
  • ML: scikit-learn, xgboost
  • DL: tensorflow, pytorch (GPU projects)
  • Utils: requests, beautifulsoup4, jupyter
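To confirm which versions of these packages your project actually has, you can query the installed metadata; a quick standard-library sketch:

```python
import importlib.metadata

# Look up installed package versions without importing the packages themselves.
versions = {}
for pkg in ("pandas", "numpy"):
    try:
        versions[pkg] = importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        versions[pkg] = None  # package not present in this environment

print(versions)
```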

Working with Data

Loading Data Files

You can load data by uploading files, or from URLs, GitHub, or cloud storage. To upload files through the project interface:
  1. Navigate to your project
  2. Click the Files tab
  3. Drag and drop, or click to upload
  4. Access the file from your notebook:
import pandas as pd
df = pd.read_csv('uploads/data.csv')

Environment Variables

Set environment variables for API keys and configuration:
import os

# Set environment variable
os.environ['API_KEY'] = 'your-api-key'

# Access environment variable
api_key = os.environ.get('API_KEY')
Never commit notebooks with API keys or secrets to public repositories. Use environment variables and exclude them from version control.
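One way to follow that advice is to read secrets only from the environment and fail loudly when they are missing, rather than hardcoding them in cells. A sketch; `API_KEY` and `get_api_key` are illustrative names:

```python
import os

def get_api_key() -> str:
    """Read the key from the environment instead of hardcoding it in the notebook."""
    key = os.environ.get("API_KEY")  # set via the terminal or project settings
    if key is None:
        raise RuntimeError("API_KEY is not set; configure it before running.")
    return key

os.environ["API_KEY"] = "example-key"  # for demonstration only
print(get_api_key())
```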

Terminal Access

Projects provide full terminal access for advanced operations.

Opening a Terminal

  1. Navigate to your project
  2. Click Terminals tab
  3. Click New Terminal

Common Terminal Tasks

# Install system packages
sudo apt-get update
sudo apt-get install graphviz

# Clone git repository
git clone https://github.com/user/repo.git

# Run scripts
python script.py

# View logs
tail -f logs/app.log

# Start web server
python -m http.server 8000

Resource Management

Monitoring Usage

Track resource consumption:
  • CPU usage: Real-time CPU utilization
  • Memory: RAM usage and available
  • Disk: Storage used/available
  • GPU (if applicable): GPU memory and utilization
View metrics on the project dashboard.

Compute Limits

Different compute types have different limits:
Type     CPU        RAM     GPU    Storage
Micro    0.5 core   1GB     -      10GB
Small    2 cores    4GB     -      50GB
GPU      4 cores    16GB    T4     100GB

Upgrade compute

Need more resources? Upgrade your plan

Auto-shutdown

Projects auto-shutdown after inactivity to save resources:
  • Hobby tier: 1 hour of inactivity
  • Pro tier: 4 hours of inactivity
  • Expert tier: 12 hours of inactivity
Configure in project settings.

Troubleshooting

Kernel won't connect. Possible causes:
  • Project is stopped or starting
  • Another notebook is using the kernel
  • Network connectivity issues
Solutions:
  • Check the project status
  • Disconnect other notebooks
  • Refresh the page
  • Restart the project
Out of memory. Symptoms: MemoryError or kernel crashes. Solutions:
  • Process data in chunks
  • Delete large unused variables with del
  • Restart the kernel to clear memory
  • Upgrade to a larger compute type
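Chunked processing keeps peak memory proportional to the chunk size rather than the file size. A sketch with pandas (the small CSV written here stands in for a file too large to load at once):

```python
import os
import tempfile

import pandas as pd

# Write a small CSV to stand in for a file too large to load at once.
path = os.path.join(tempfile.mkdtemp(), "data.csv")
pd.DataFrame({"value": range(10)}).to_csv(path, index=False)

# Read and aggregate in fixed-size chunks; only one chunk is
# ever held in memory at a time.
total = 0
for chunk in pd.read_csv(path, chunksize=4):
    total += chunk["value"].sum()

print(total)
```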
Slow execution. Causes: insufficient resources or inefficient code. Solutions:
  • Profile code to find bottlenecks
  • Vectorize operations (use pandas/numpy)
  • Upgrade the compute type
  • Use a GPU for ML workloads
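As an illustration of vectorizing, summing an array element by element in Python versus handing the same reduction to NumPy's native implementation:

```python
import time

import numpy as np

data = np.arange(200_000)  # integers 0..199999

# Python loop: every element passes through the interpreter.
start = time.perf_counter()
loop_total = 0
for x in data:
    loop_total += x
loop_time = time.perf_counter() - start

# Vectorized: the same reduction runs in optimized native code.
start = time.perf_counter()
vec_total = data.sum()
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.4f}s, vectorized: {vec_time:.6f}s")
```

Both approaches produce the same total, but the vectorized call is typically orders of magnitude faster.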
Package problems. Issue: packages won't install or conflict. Solutions:
  • Create a fresh conda environment
  • Use a virtual environment
  • Pin package versions
  • Check compatibility

Next Steps