Strokify | My Image Outline App
Devan Lee
App development has always been something I’ve wanted to dive into. I’ve always had this insatiable drive to create — whether it’s YouTube videos, music, or apps — and I’ve been fascinated by the idea of building apps since I was young. I never had the chance to dedicate myself to learning it until I became an adult.
I see app development not just as a potential path to financial success, but as a way to solve real problems — problems I face and that might help others too. That’s what creating apps should be about.
I’m also incredibly grateful to live in a time where I can learn development and even have AI to assist me. Honestly, without AI, creating an app would have been much harder.
Now, I’m proud to say I’ve successfully built my first web app: Strokify. It’s a simple tool that removes the background from an image and adds an outline stroke to emphasize the subject.
It might seem basic, but that’s intentional — I wanted to start with something manageable for my first app. It’s not going to make it to Y Combinator tomorrow, but it’s something I’m genuinely proud of, and I’m excited to share it with you.
Speaking of Y Combinator, one of my life goals is to deploy an app that gets funding from Y Combinator, lol. But that's another story for another chapter of my life.
Anyways, introducing Strokify:
The Problem
Ever tried adding a stroke outline to an object in a photo? Your options are usually expensive software or sketchy online tools that produce mediocre results.
Our Solution
Strokify solves this. It's a fast, accurate, and completely free option.
We provide an accurate, efficient way to apply strokes to objects in your images, without the need for advanced image-editing or graphic design software.
Why It Matters
Applying a stroke to an object in an image emphasizes it and draws the viewer's eye directly to it.
From content creators who need outlined subjects for thumbnails to companies preparing materials for their own projects, Strokify is the solution.
You can check out the app here:
https://strokify-app.netlify.app
- Development -
I had the idea for this app in October and finished and published it in November. It's the first web app I've ever created, and I'm really proud of it.
I created this app to solve a problem I figured other people were experiencing too. While working on a YouTube video, I made a thumbnail featuring a figure that I had manually traced with a white stroke:
The guy in the center of the photo — the one outlined in blue — was actually the inspiration behind building this app. I realized that not everyone has the tools or resources to recreate that outlined-photo look for their thumbnails or other projects. So I thought, why not build something that solves that problem for people? It would give me hands-on experience with app development, and it would also be a great first project to learn how to build and publish an actual app.
I set up a detailed outline of what I wanted to do, the problem I was seeking to fix, the obstacles in my way, and what I wanted to accomplish:
Purpose:
The app will take an image, determine the object(s) of interest, and apply a stroked outline to distinguish the object from the background.
- The stroke would default to white, and the user would have the option to set the stroke's color, size, inline glow, and outline glow.
- The user would also have the ability to crop the image out of the background.
Multiple objects in one picture can be highlighted or unhighlighted, and their front-to-back positions can be changed so the highlights layer together cohesively.
AI will be used in the process of identifying objects in the image and applying an accurate, detailed outline.
User interface:
The app will be easily accessible and have a simple, reactive UI to promote ease of use: a file uploader the user can use to import files, and an interface for adjusting and changing the outlines in their images.
The user would be able to export their images in JPG, PNG, IMG, or SVG.
Completing a very simple design should take a user no longer than 3 minutes.
Purpose:
The purpose of this app is to give users an easily accessible browser tool for creating image outlines. It is intended for users making thumbnails for YouTube, images for social media, emphasizing an object in an image, or general outline / app experimentation.
Pricing:
Access to the app will be completely free; monetization will come through ads placed on the site.
Competition:
There are many apps that offer the same features described above.
This project will be done as a challenge for myself.
The main differentiation of this project from other resources is that it works as a simpler, quicker tool.
------
The only issue was… I had no idea how to make an app. Zero clue.
So I spent about a week or two learning the fundamentals of React. My first real project was the classic default React tic-tac-toe tutorial. At first, it was tough to remember how everything worked — React, TypeScript, JavaScript (which I absolutely hate, by the way). But once things started clicking, the front end became surprisingly manageable.
The real challenge came afterward: the backend.
Tech Stack Breakdown
Frontend (React)
- React + Vite (faster than Create React App)
- Fabric.js or Konva.js - for canvas manipulation and outline rendering
- React Dropzone - for file uploads
- Tailwind CSS - quick, responsive styling
Backend Options
You have two main approaches:
Option 1: Serverless/API-based (Recommended for MVP)
No traditional backend server needed:
- Background Removal API:
- Remove.bg API - best quality, has free tier
- Cloudinary AI Background Removal
- Clipdrop API
- Object Detection (if you need specific object boundaries):
- Google Cloud Vision API
- AWS Rekognition
For the backend, I decided to go with something easy and free: the Remove.bg API.
The plan was to send the image to the remove.bg API to strip the background and isolate the subject, then have my app apply the outline stroke, which the user could customize.
I used:
- React for the frontend
- Firebase as the backend
- Remove.bg for the background-removal API call
- Netlify as the hosting service
At a high level, the client-side flow looked something like the sketch below.
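Here's a minimal sketch of that client-side flow, assuming a serverless endpoint at /.netlify/functions/remove-background that proxies remove.bg and returns a transparent PNG. The endpoint path, the fileToBase64 helper, and applyStroke (sketched near the end of this post) are illustrative names rather than the exact production code.

// Hypothetical client-side flow: upload -> background removal -> stroke -> download
const fileToBase64 = (file) =>
  new Promise((resolve) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result.split(',')[1]); // strip the "data:...;base64," prefix
    reader.readAsDataURL(file);
  });

async function processImage(file, strokeColor = '#ffffff', strokeWidth = 12) {
  // 1. Send the upload to my serverless function, which calls remove.bg with the secret API key
  const image = await fileToBase64(file);
  const res = await fetch('/.netlify/functions/remove-background', {
    method: 'POST',
    body: JSON.stringify({ image }),
  });
  if (!res.ok) throw new Error('Failed to remove background');

  // 2. The function responds with a transparent PNG of the subject
  const subject = await createImageBitmap(await res.blob());

  // 3. Apply the customizable outline stroke on a canvas (applyStroke is sketched later in this post)
  const strokedCanvas = applyStroke(subject, strokeColor, strokeWidth);

  // 4. Hand the result back as a downloadable PNG data URL
  return strokedCanvas.toDataURL('image/png');
}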
I ran into some issues when creating and deploying this app:
First Bug: API calls not tracking
- What triggered the bug? The “Process Image” button always looked usable, even when the user had run out of free strokes or their API token wasn’t set up. The UI only learned how many tries were left after a successful request, so any failure (missing API key, expired login, quota exceeded) fell back to the same “Failed to process image” alert.
- What was happening behind the scenes? The backend had good guardrails: it checked for a Firebase token, enforced three free uses per day, and called remove.bg with my API key. But the frontend never asked the server “How many tries do I have before I hit the limit?” So the app felt broken even though the server was just doing its job.
- How I solved it: I added a lightweight GET /api/usage-status endpoint that reports each user’s remaining tries (without incrementing anything). On the client, I fetched that number as soon as the user signed in and displayed it right above the upload button, color-coded (green when tries remain, red when exhausted). I also disabled the “Process Image” button when no tries remain and showed a clear warning instead of letting users slam into the same generic error. (A rough sketch of the endpoint and the client check follows this list.)
- Why it matters: Exposing the server’s state and reflecting it in the UI turned a mysterious failure into a predictable, friendly experience. Users now know exactly how many free strokes they get, the button politely grays out when they hit the limit, and any genuine server errors come with detailed logs for debugging.
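Here's a rough sketch of that fix, assuming a Netlify function at /.netlify/functions/usage-status, a hypothetical getRemainingTries helper backed by Firebase, and hypothetical React state setters (setRemainingTries, setProcessDisabled); the names and shapes are illustrative rather than the exact production code.

// netlify/functions/usage-status.js (illustrative sketch)
// Reports the signed-in user's remaining free tries WITHOUT incrementing anything.
const { getRemainingTries } = require('./usage-helpers'); // hypothetical Firebase-backed helper

exports.handler = async (event) => {
  const token = (event.headers.authorization || '').replace('Bearer ', '');
  if (!token) {
    return { statusCode: 401, body: JSON.stringify({ error: 'Not signed in' }) };
  }

  const remaining = await getRemainingTries(token); // e.g. 3 free uses per day, minus today's usage
  return { statusCode: 200, body: JSON.stringify({ remaining, limit: 3 }) };
};

// On the client, fetch that number right after sign-in and gate the button:
async function refreshUsage(idToken) {
  const res = await fetch('/.netlify/functions/usage-status', {
    headers: { Authorization: `Bearer ${idToken}` },
  });
  const { remaining } = await res.json();
  setRemainingTries(remaining);        // displayed above the upload button, color-coded
  setProcessDisabled(remaining <= 0);  // gray out "Process Image" when exhausted
}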
Second Bug: API not connecting
My background-removal API worked flawlessly on my laptop but exploded on Netlify with “Failed to remove background.” Locally, the .env file fed the API key to the function; in production, Netlify didn’t know about that key, so requests reached remove.bg with “no auth,” and the service rejected every image. Adding the API key to Netlify’s environment variables and letting the function read it there fixed everything.
What was actually happening?
- The serverless function remove-background.js pulls the remove.bg API key from process.env.REMOVE_BG_API_KEY each time it runs (a stripped-down sketch of the function follows this list).
- On my machine, dotenv loaded that key from .env, so the HTTP POST to https://api.remove.bg/v1.0/removebg succeeded and returned clean PNGs (@netlify/functions/remove-background.js#165-195).
- Netlify, however, didn’t have that environment variable set, so the function hit the “API key missing” branch and remove.bg returned an error. The user-facing message boiled down to “Failed to process image,” even though the real issue was auth.
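For context, here's a stripped-down sketch of what such a function looks like. It's illustrative rather than the exact contents of remove-background.js (remove.bg expects the key in an X-Api-Key header), but it shows the env-var dependency and the error branch described above.

// netlify/functions/remove-background.js (simplified, illustrative sketch)
require('dotenv').config(); // local dev: load .env; in production the variable comes from Netlify's settings

exports.handler = async (event) => {
  const apiKey = process.env.REMOVE_BG_API_KEY;
  if (!apiKey) {
    // The branch production kept hitting before the env var was configured on Netlify
    console.error('REMOVE_BG_API_KEY is missing');
    return { statusCode: 500, body: JSON.stringify({ error: 'API key missing' }) };
  }

  // Forward the base64-encoded upload to remove.bg (Node 18+ provides fetch natively)
  const { image } = JSON.parse(event.body);
  const response = await fetch('https://api.remove.bg/v1.0/removebg', {
    method: 'POST',
    headers: { 'X-Api-Key': apiKey, 'Content-Type': 'application/json' },
    body: JSON.stringify({ image_file_b64: image, size: 'auto' }),
  });

  if (!response.ok) {
    const details = await response.text();
    console.error('remove.bg error:', details); // shows up in Netlify function logs
    return { statusCode: response.status, body: JSON.stringify({ error: 'Failed to remove background' }) };
  }

  // remove.bg responds with the cut-out subject as binary PNG data
  const buffer = Buffer.from(await response.arrayBuffer());
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'image/png' },
    body: buffer.toString('base64'),
    isBase64Encoded: true,
  };
};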
Debugging steps & root cause
- Reproduced locally vs. production: the local build worked; the Netlify build didn’t → likely a config mismatch.
- Added diagnostics: logged whether process.env.REMOVE_BG_API_KEY existed and dumped remove.bg’s error payload. Production logs showed “API key missing” despite the same code succeeding locally (@netlify/functions/remove-background.js#40-190).
- Checked Netlify config: no environment variable had been defined for the hosted function.
- Conclusion: Netlify requests were unauthenticated, so remove.bg refused to process the image.
How it was solved
- Expose the API key to Netlify: Added a netlify.toml declaring the build command and specifying that REMOVE_BG_API_KEY, FREE_LIMIT_DISABLED, and NODE_VERSION must come from Netlify’s environment (@netlify.toml#1-21); a rough sketch of that file appears after this list. Inside Netlify’s dashboard (Site settings → Build & deploy → Environment), created REMOVE_BG_API_KEY with the real key (and optionally FREE_LIMIT_DISABLED).
- Keep local dev easy but safe: Installed dotenv and added require('dotenv').config() so local runs still pick up .env (@netlify/functions/remove-background.js#1-4). Added .env to .gitignore so secrets never hit git.
- Deploy + verify: After redeploying, the Netlify function now sees the same environment variables as local, and remove.bg responds successfully in both environments.
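For reference, a netlify.toml for this kind of setup might look roughly like the sketch below. The secret values themselves never go in the file; they stay in Netlify's dashboard so the deployed function can read them from process.env (the real @netlify.toml may differ).

# netlify.toml (illustrative sketch; the real file may differ)
[build]
  command = "npm run build"
  publish = "dist"

[build.environment]
  # Non-secret settings can be pinned here
  NODE_VERSION = "18"

[functions]
  directory = "netlify/functions"

# REMOVE_BG_API_KEY and FREE_LIMIT_DISABLED are defined in the Netlify UI
# (Site settings → Build & deploy → Environment), never committed to the repo.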
Stroking Logic
As for the stroking logic: after the API call returns the background-removed image, I draw it onto a canvas and run it through a helper function that adds the stroke. It works in three stages (a condensed sketch of the whole helper appears after the “Why Two Passes?” section below).
1. Canvas Setup
After the transparent PNG is processed, the image is drawn onto an off-screen canvas with extra padding equal to the stroke width. This prevents the outline from getting clipped.
The function then extracts the pixel data with ctx.getImageData and prepares two arrays:
- the raw pixel data
- a strokeMask array for tracking which pixels should become part of the outline
The target RGB stroke color is also stored here.
2. Edge Detection (First Pass)
The algorithm scans every pixel in the image. For any transparent pixel (alpha < 128), it checks a circular neighborhood around it with a radius equal to the stroke width. The radius is enforced using a Euclidean distance check:
sqrt(dx² + dy²) <= strokeWidth
If any pixel inside that circular region is part of the original subject (alpha > 0), then the current transparent pixel is marked in strokeMask as an outline pixel.
This ensures a natural, circular edge — not the blocky “diamond” shape you get from Manhattan-distance checks.
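In code, that check boils down to a small predicate like the one below (a sketch; isOutlinePixel is my name for it, not the production code). For a transparent pixel at (x, y), it asks whether any subject pixel sits within a Euclidean distance of strokeWidth.

// Returns true if the pixel at (x, y) has a subject pixel (alpha > 0) within
// a circular neighborhood of radius strokeWidth, using a Euclidean distance check.
function isOutlinePixel(data, width, height, x, y, strokeWidth) {
  for (let dy = -strokeWidth; dy <= strokeWidth; dy++) {
    for (let dx = -strokeWidth; dx <= strokeWidth; dx++) {
      if (dx * dx + dy * dy > strokeWidth * strokeWidth) continue; // outside the circle: skip
      const nx = x + dx, ny = y + dy;
      if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue; // off-canvas: skip
      if (data[(ny * width + nx) * 4 + 3] > 0) return true;          // neighbor belongs to the subject
    }
  }
  return false;
}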
3. Stroke Painting (Second Pass)
A final sweep through the canvas looks for any pixel flagged in strokeMask. For each one, the algorithm writes the stroke color and full opacity directly into the pixel data.
The updated image data are then placed back onto the canvas and exported as a PNG.
Why Two Passes?
Separating detection and painting prevents the algorithm from overwriting source pixels or confusing newly painted outline pixels with the original subject. This guarantees clean edges and a consistent stroke thickness around the entire image.
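Putting the three stages together, the whole helper ends up looking roughly like this condensed sketch. It reuses the isOutlinePixel predicate from above, and the names are illustrative rather than the exact production code.

// Condensed sketch of the two-pass stroke helper (uses isOutlinePixel from the previous snippet)
function applyStroke(image, strokeColor, strokeWidth) {
  // Stage 1: canvas setup, padded by strokeWidth on every side so the outline isn't clipped
  const canvas = document.createElement('canvas');
  canvas.width = image.width + strokeWidth * 2;
  canvas.height = image.height + strokeWidth * 2;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(image, strokeWidth, strokeWidth);

  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const data = imageData.data;                                      // raw RGBA pixel data
  const strokeMask = new Uint8Array(canvas.width * canvas.height);  // marks future outline pixels
  const [r, g, b] = hexToRgb(strokeColor);                          // target stroke color

  // Stage 2: edge detection (first pass). Only transparent pixels (alpha < 128) can become outline.
  for (let y = 0; y < canvas.height; y++) {
    for (let x = 0; x < canvas.width; x++) {
      const alpha = data[(y * canvas.width + x) * 4 + 3];
      if (alpha < 128 && isOutlinePixel(data, canvas.width, canvas.height, x, y, strokeWidth)) {
        strokeMask[y * canvas.width + x] = 1;
      }
    }
  }

  // Stage 3: stroke painting (second pass). Write the stroke color at full opacity.
  for (let i = 0; i < strokeMask.length; i++) {
    if (!strokeMask[i]) continue;
    const p = i * 4;
    data[p] = r; data[p + 1] = g; data[p + 2] = b; data[p + 3] = 255;
  }

  ctx.putImageData(imageData, 0, 0);
  return canvas; // caller exports with canvas.toDataURL('image/png')
}

// Small helper: '#rrggbb' -> [r, g, b]
function hexToRgb(hex) {
  const n = parseInt(hex.replace('#', ''), 16);
  return [(n >> 16) & 255, (n >> 8) & 255, n & 255];
}

One design note: the neighborhood scan costs roughly width × height × strokeWidth² checks, which is fine for thumbnail-sized images; a distance transform would be the first thing to reach for if larger images ever felt slow.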
- Conclusion -
I created this app to solve a pretty niche problem—one I figured could genuinely help other creators. It’s nothing crazy, but building it was a lot of fun and it taught me a ton about app development, publishing, working with React, and navigating back-end concepts.
The experience was worth it on its own, and I’m excited to take everything I learned here into future projects. I’m pretty sure I’ll keep building more apps, and this was a great first step.
Thank you for following along with my development journey. If you end up using the app, I hope it serves you well and that you’re happy with the final product.
Anyways, I named my app "Strokify" because I wanted to give it a penis innuendo because I thought it'd be funny, while keeping plausible deniability and also referencing the fact that my app adds strokes to images.
Also! I hid a bunch of penis and masturbation innuendos in the app. Try and find them all heheheheheheh.
Thanks again for the support.
- Devv
