
Managing my Dev Codebase and Workflow

Note: Written with the help of my research and editorial team 🙂 including Google Gemini, Google Notebook LM, Microsoft Copilot, Perplexity.ai, Claude.ai, and others as needed.

Taming the Chaos with NAS, Git, and a Backup Plan

As a hobbyist developer, my code situation used to be… well, let’s call it “organically grown chaos.” I have a sprawling collection of projects: some are deeply committed to Git (sometimes synced to GitHub, sometimes just local), others are simple scripts or experiments that never made it into version control, and a few are just old ideas I haven’t touched in years.

Adding to the complexity, I often work on two different laptops, and my trusty Network Attached Storage (NAS) is the central hub for everything. My old workflow of manually dragging folders between my NAS and my laptop, or worse, trying to make OneDrive “sync” a Git repo, led to headaches, lost work, and endless frustration.

So, I developed a new system, and I want to share it because it might just save another hobbyist from the same pain!

The Goal: Safety, Speed, and Sanity Across My Devices

My primary goals for a new workflow were clear:

  • Data Safety: Never lose code, whether it’s a Git project or a simple script.
  • Cross-Device Access: Work seamlessly on either laptop, online or offline.
  • Performance: No more waiting for cloud drives to sync thousands of tiny files.
  • Simplicity: A system I can actually stick to, even after a long day at work.

The Storage Trade-off: Why Not Everything is in Git

One reality every hobby developer eventually hits is that local Git repositories can get massive. Because Git’s core design involves keeping the entire history of every file change locally on your machine, a long-running project—especially one with large assets, binary files, or frequent changes—can quickly balloon into gigabytes. For a developer working on laptops with limited SSD space, keeping a dozen heavy Git histories locally is a major challenge. This is precisely why not all of my projects are under source control. For smaller scripts or “quick-and-dirty” experiments, the simplicity of a non-Git folder managed by a fast robocopy backup is often more practical than the storage overhead and management required by a full Git repository.
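
By the way, if you want to see how much disk space a repository's history is actually consuming, Git can tell you directly. A quick check I find useful (the project path here is just an example):

cd C:\LocalData\DevWorking\MyProject
git count-objects -vH

The size-pack line in the output is roughly the size of the packed history sitting inside the hidden .git folder.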

The Problem with OneDrive (and other Cloud Drives) for Code

Before diving into the solution, it’s crucial to understand why cloud drives like OneDrive, Google Drive, or Dropbox are generally a bad idea for active code repositories:

  1. Repository Corruption: Git relies on thousands of small, interlinked files in its hidden .git folder. When a cloud drive tries to sync these files while Git is actively modifying them, you risk “race conditions” or partial syncs that can corrupt your entire project history.
  2. Performance Hit: Code projects often have massive dependency folders (like node_modules or venv) with thousands of files. Cloud drives try to sync every single one, which is incredibly slow and eats bandwidth.
  3. Conflict Hell: If a cloud drive creates “conflict copies” of Git’s internal files, Git simply won’t know what to do, leading to a broken repository.

My solution avoids these pitfalls by treating Git projects, non-Git projects, and active work differently.

My Workflow: NAS as the Hub, Laptop as the Workbench

Here’s how I structure my code management:

1. The NAS: My Central Code Library

My NAS is the “source of truth” and acts as my project archive. I’ve organized it into three main directories, laid out in the sketch below:

  • Z:\DevCode\git: This is where all my Git-controlled projects live. Critically, these are “Bare Repositories.” They act as the Git database, not a working copy of the code. My laptops “push” and “pull” from these.
  • Z:\DevCode\nonGit: This folder houses all my experiments, one-off scripts, and projects that just don’t warrant Git yet.
  • Z:\DevCode\nonGit_Backups: This is where my non-Git projects get safely backed up.
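
Put together, the top level of the share looks roughly like this (the project names are just placeholders):

Z:\DevCode\
    git\               MyProject.git, AnotherTool.git, ...   (bare repositories only)
    nonGit\            experiments and one-off scripts
    nonGit_Backups\    Backup_2025-12-31_1430\, ...          (timestamped robocopy copies)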

2. The Laptop: My Active Workbench

I have a couple of laptops that I work with, depending on where I am. Generally, I do all my active coding on my laptop’s local SSD in a dedicated folder: C:\LocalData\DevWorking. This ensures maximum speed and isolates my live work from any network or cloud syncing issues.

Remote Access: When I don’t have time to sync or copy the code, I often just access my “home” system remotely. It works well and is secure, though the experience depends on my internet speed at the time.

The Workflow in Practice: Two Distinct Paths

Path A: For Git-Controlled Projects (The Safe & Seamless Way)

When I’m working on a project that uses Git, here’s the flow (the complete command sequence is sketched after this list):

  1. Setting Up: On my NAS, I create a folder (e.g., MyProject.git) and run git init --bare.
  2. Connecting: On my laptop, I initialize my project and connect it to the NAS: git remote add nas Z:\DevCode\git\MyProject.git
  3. Daily Routine: I make changes and git commit regularly. When done, I run git push nas.
  4. Switching Laptops: On Laptop B (after an initial clone from the NAS), I simply run git pull nas and I am exactly where I left off.
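
Here’s the whole round trip as a single command sequence. It’s a minimal sketch: it assumes the NAS share is mapped as Z: like above, the branch ends up named main (use master if that’s what your Git defaults to), and MyProject is a placeholder name.

REM One-time: create the bare repository on the NAS
git init --bare Z:\DevCode\git\MyProject.git

REM One-time on Laptop A: put the project under Git and connect it to the NAS
cd C:\LocalData\DevWorking\MyProject
git init
git add .
git commit -m "Initial commit"
git remote add nas Z:\DevCode\git\MyProject.git
git push -u nas main

REM Daily routine: commit locally, push to the NAS when done
git add .
git commit -m "Describe the change"
git push nas

REM One-time on Laptop B: clone from the NAS and keep the remote name consistent
cd C:\LocalData\DevWorking
git clone Z:\DevCode\git\MyProject.git
cd MyProject
git remote rename origin nas

REM After that, switching laptops is just
git pull nas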

Path B: For Non-Git Projects (The Scheduled Backup Way)

For casual projects, I use a simple, powerful batch script to manage backups. I never try to sync these folders directly.

  1. Daily Routine: When I finish working, I run a custom batch script from my desktop.
  2. What it does: It creates a new, timestamped folder on the NAS (e.g., Backup_2025-12-31_1430) and copies everything from my laptop.
  3. Efficiency: Crucially, it excludes all “junk” folders like node_modules, venv, and .vs.

The Backup Script (BackupNonGit.bat):

@echo off
SETLOCAL

REM Source folder on the laptop and the backup root on the NAS
SET LOCAL_PATH=C:\LocalData\DevWorking
SET NAS_BASE_PATH=Z:\DevCode\nonGit_Backups

REM Build a timestamp like 2025-12-31_1430 (this parsing assumes a US-style
REM "date /t" output such as "Tue 12/31/2025"; adjust the tokens for other locales)
for /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set mydate=%%c-%%a-%%b)
for /f "tokens=1-2 delims=: " %%a in ('time /t') do (set mytime=%%a%%b)
set TIMESTAMP=%mydate%_%mytime%
set FINAL_BACKUP_PATH=%NAS_BASE_PATH%\Backup_%TIMESTAMP%

echo STARTING TIMESTAMPED BACKUP TO: %FINAL_BACKUP_PATH%

REM /E copies all subfolders (including empty ones), /R:0 /W:0 skips unreadable
REM files without retrying, and /XD excludes the regenerable "junk" folders
robocopy "%LOCAL_PATH%" "%FINAL_BACKUP_PATH%" /E /R:0 /W:0 /XD .git node_modules venv .vs __pycache__ target bin obj

REM Robocopy exit codes of 8 or higher mean at least one copy failed
IF %ERRORLEVEL% GEQ 8 (
    echo Backup finished with ERRORS - check the robocopy output above.
) ELSE (
    echo Backup Complete!
)
pause

Why this works for a hobby developer like me:

  • No More Corruption: No cloud service is messing with my .git folders.
  • Offline First: I always work on my local drive; internet speed is only a factor when I’m ready to sync.
  • Flexible Transition: If a nonGit project gets serious, I git init it and connect it to a new bare repository on the NAS, exactly as in Path A, without losing any work; the earlier timestamped backups stay around as a record of its pre-Git days.
  • “Time Machine”: The timestamped backups give me a basic version history for non-Git projects, preventing accidental loss; restoring is just robocopy in the other direction (see the sketch below).
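
For completeness, here’s what a restore looks like. A minimal sketch, using the example backup folder name from earlier and a placeholder project name:

REM Copy a project back out of a timestamped backup onto the laptop
robocopy "Z:\DevCode\nonGit_Backups\Backup_2025-12-31_1430\MyExperiment" "C:\LocalData\DevWorking\MyExperiment" /E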

This system isn’t as fully automated as a cloud-synced world, but for a hobby developer juggling diverse projects and multiple machines, it provides the perfect balance of safety, speed, and manual control.

Disclaimer: I work for Dell Technology Services as a Workforce Transformation Solutions Principal. It is my passion to help guide organizations through the current technology transition, specifically as it relates to Workforce Transformation. Visit the Dell Technologies site for more information. Opinions are my own and not the views of my employer.