
Cloning all my GitHub repos faster than the default

A Bash + Git + jq script to fetch every repo I own with submodules, handling pagination and dry-run previews.

Date

14-10-2025

Tags
Git, Automation, Bash, GitHub API

Why

I wanted a faster, zero-click way to back up every GitHub repo I own, including submodules, without relying on the GitHub UI. The goal: a single command that pulls everything down (or simulates it) using SSH.

What I built

  • Bash script using curl + GitHub API + jq to list all repos (with pagination).
  • Clones via git clone --recurse-submodules to keep submodules in sync.
  • --dry-run flag to preview what would be cloned without touching disk.
  • Skips already-present repos to avoid redundant work.
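The listing half of that can be sketched as below. This is a minimal illustration, not the script itself: the `affiliation=owner` filter and the `ssh_urls` / `list_repos` helper names are assumptions, but the endpoint, the `per_page=100` page size, and the empty-page stop condition follow the GitHub REST API's pagination behavior.

```shell
#!/usr/bin/env bash
# Sketch: enumerate every owned repo's SSH clone URL, one page at a time.
set -euo pipefail

# Pull the SSH clone URL out of each repo object in one API response page.
ssh_urls() {
  jq -r '.[].ssh_url'
}

# Walk the paginated /user/repos endpoint; an empty page means we're done.
list_repos() {
  local page=1 body
  while :; do
    body=$(curl -sf -H "Authorization: Bearer ${GITHUB_TOKEN}" \
      "https://api.github.com/user/repos?affiliation=owner&per_page=100&page=${page}")
    [ "$(jq 'length' <<<"$body")" -eq 0 ] && break
    ssh_urls <<<"$body"
    page=$((page + 1))
  done
}
```

Keeping the jq filter in its own function makes the loop easy to test against a canned JSON page without touching the network.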

How it works (high level)

  1. Requires GITHUB_TOKEN with repo metadata read access; uses SSH for cloning.
  2. Fetches the paginated repo list from the GitHub API and feeds each clone URL into a processing loop.
  3. For each repo: clone with submodules if missing; otherwise skip.
  4. Optional dry-run prints planned actions only.
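The per-repo decision in steps 3 and 4 (skip, preview, or clone) might look like this sketch; `CLONE_DIR`, `DRY_RUN`, and `clone_one` are hypothetical names standing in for whatever the actual script uses:

```shell
#!/usr/bin/env bash
# Sketch: clone one repo with submodules, unless it already exists on disk
# or we are only previewing. CLONE_DIR and DRY_RUN are illustrative knobs.
set -euo pipefail

CLONE_DIR="${CLONE_DIR:-$HOME/backups/github}"
DRY_RUN="${DRY_RUN:-0}"

clone_one() {
  local url=$1
  # git@github.com:me/repo.git -> repo
  local name=${url##*/}
  name=${name%.git}
  if [ -d "${CLONE_DIR}/${name}/.git" ]; then
    echo "skip  ${name} (already cloned)"
  elif [ "$DRY_RUN" = "1" ]; then
    echo "would clone ${url} -> ${CLONE_DIR}/${name}"
  else
    git clone --recurse-submodules "$url" "${CLONE_DIR}/${name}"
  fi
}
```

Checking for `.git` inside the target directory (rather than just the directory itself) avoids treating a half-created folder as a finished clone.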

Notes

  • Set GITHUB_TOKEN in your environment and ensure ssh-agent is running with your key added.
  • Configure CLONE_DIR in the script to choose where backups land.
  • Designed for Linux/macOS with Bash + curl + jq + git available.
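Put together, a typical setup and run could look like the following; the token value, the key path, and the script name `clone-all.sh` are placeholders, not the real ones:

```shell
# One-time setup: token for the API listing, SSH key for the clones.
export GITHUB_TOKEN=ghp_your_token_here   # needs repo metadata read access
eval "$(ssh-agent -s)"                    # start an agent if none is running
ssh-add ~/.ssh/id_ed25519                 # the key registered with GitHub

# Preview first, then clone for real once the list looks right.
CLONE_DIR=~/backups/github ./clone-all.sh --dry-run
CLONE_DIR=~/backups/github ./clone-all.sh
```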