Best Way to Setup Local Plasmic Studio on WSL

I was told the best way to get started developing for the open-source Plasmic Studio is to run Postgres in Docker but install everything else manually. However, I had some challenges with the second part of that.

On WSL, I ran the command I got from the documentation, docker-compose up -d --no-deps plasmic-db, which worked perfectly. (Granted, I needed Docker Desktop open in Windows for it to work; I always forget that part.) After that, I installed brew, then asdf via brew, but asdf kept erroring out. It turned out I needed to add the shims path in .bashrc by appending this line at the end: export PATH="${ASDF_DATA_DIR:-$HOME/.asdf}/shims:$PATH" (straight quotes, not curly quotes). After that, the asdf install commands worked.
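For reference, that .bashrc append can be done non-interactively. This is just a sketch that assumes the default asdf data directory:

```shell
# The shims line to add (must use straight quotes, not curly quotes)
line='export PATH="${ASDF_DATA_DIR:-$HOME/.asdf}/shims:$PATH"'
# Only append if the line is not already there, to keep .bashrc idempotent
grep -qxF "$line" ~/.bashrc 2>/dev/null || echo "$line" >> ~/.bashrc
```

Then open a new shell (or run source ~/.bashrc) so the shims take effect.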

Next came the really hard part. While yarn install worked in both the root plasmic and platform/wab directories, no matter what I tried I could not get the data to seed: yarn seed did nothing but error out.

I turned to the Dockerfile and asked an AI to convert it to Ubuntu bash commands. This is what it came up with. I followed it verbatim, and it worked like a charm.

# WSL Ubuntu bootstrap for Plasmic
set -euxo pipefail

# 0) System deps
sudo apt update
sudo apt install -y \
  git curl jq bash build-essential python3 python3-pip postgresql-client pkg-config procps libpq-dev

# 1) Node/Yarn (keep Yarn v1)
corepack enable || true
corepack prepare yarn@1.22.21 --activate

# 2) Repo deps + bootstrap
cd ~/plasmic
yarn install --frozen-lockfile --prefer-offline
mkdir -p ~/.plasmic
cp platform/wab/tools/docker-dev/secrets.json ~/.plasmic/secrets.json
yarn setup
yarn setup:canvas-packages

# 3) WSL niceties
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf >/dev/null || true
sudo sysctl -p || true
export NODE_OPTIONS="--max_old_space_size=4096"

# 4) DB env (adjust if needed)
export PGHOST=${PGHOST:-localhost}
export PGPORT=${PGPORT:-5432}
export PGUSER=${PGUSER:-$USER}
export PGPASSWORD=${PGPASSWORD:-SEKRET}
export PGDATABASE=${PGDATABASE:-plasmic}

# Optional: create DB if missing
createdb -h "$PGHOST" -p "$PGPORT" -U "$PGUSER" "$PGDATABASE" 2>/dev/null || true

# 5) Migrate, seed, run
cd platform/wab
tmp=$(mktemp)
jq --arg host "$PGHOST" --arg pass "$PGPASSWORD" '.host=$host | (.password //= $pass)' ormconfig.json > "$tmp" && mv "$tmp" ormconfig.json

# Run migrations then seed
yarn typeorm migration:run
PGHOST=$PGHOST PGPORT=$PGPORT PGUSER=$PGUSER PGPASSWORD=$PGPASSWORD PGDATABASE=$PGDATABASE yarn seed

# Start dev (from repo root)
cd ../..
yarn dev

In order for this to work, the other parts of the Contribution Guide (the .env file, the asdf plugins, etc.) still have to be set up.

Thought I’d share my success and tricks. Maybe it will inspire a documentation update.

Thanks for the explanations!

If seeding failed when you ran Postgres in Docker, it sounds like either a credentials or a networking issue. The environment running the seed command needs the correct Postgres database credentials, as well as access to Postgres on the default port.
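One quick way to separate those two failure modes is to test the port before worrying about credentials. This is a sketch using bash's /dev/tcp redirection, not anything from the official docs:

```shell
# Check whether a TCP connection to host:port succeeds, using bash's /dev/tcp
pg_reachable() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null || return 1
}

# If the port is open but seeding still fails, suspect credentials, not networking
if pg_reachable "${PGHOST:-localhost}" "${PGPORT:-5432}"; then
  echo "Postgres port is reachable; check credentials next"
else
  echo "Cannot reach Postgres; check Docker Desktop and the port mapping"
fi
```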

So if Postgres is available (e.g. you can connect with psql in your current shell), and you have the correct env set (PGHOST/PGPASSWORD/PGDATABASE) from your AI script, the original setup should work (it’s similar to how most of us develop Plasmic locally).

Postgres was available via the Docker command that runs just Postgres, and the correct .env was added in both locations. Something else in the list of commands the AI output made this work. Maybe it was pipefail, maybe a system dependency, maybe enabling corepack, maybe setup:canvas-packages, maybe createdb, but it was definitely something before the yarn typeorm migration:run command. I know, because I tried running that command instead of yarn seed, and it gave similar errors.

I don’t have a Windows machine to test with at the moment, but I’ll try to replicate the issue at some point.

Without seeing the actual error it’s hard to know for sure, but if your initial migrate attempt failed and that script succeeded, it’s very likely a credentials problem.

Just to be clear, the AI script is doing things correctly, so it’s probably our documentation and/or Docker files that need to be improved. It is necessary to run migrate before seed, and you need to manually set up the credentials locally before running both of those commands.
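For anyone following along, that ordering can be summarized as a small guard script. This is only a sketch: the yarn commands come from this thread, and the require helper is a hypothetical convenience:

```shell
# Sketch: fail fast on missing credentials, then migrate before seeding.
# require is a hypothetical helper using bash indirect expansion (${!1}).
require() { [ -n "${!1:-}" ] || { echo "missing $1"; return 1; }; }

# Credentials must be in the environment before either command runs
for v in PGHOST PGPORT PGUSER PGPASSWORD PGDATABASE; do
  require "$v" || echo "set $v before migrating/seeding"
done

# Order matters: schema first, then data (run these in platform/wab)
echo "1) yarn typeorm migration:run   # create/update the schema"
echo "2) yarn seed                    # then seed the data"
```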