As a new developer I am constantly trying new technologies, and recently I have really enjoyed coding in Next.js 13 using the /app directory.
Use Case
I wanted to be able to show my total all-time GitHub commits across my repositories in the footer section of my portfolio. Why? Because I can. As I started working on the code, I realized this was going to be an expensive task from a resource perspective.
Assumptions
- You are familiar with Next.js 13 and React.
- You are familiar with Vercel and have your project linked using their CLI or a GitHub repository.
Limitations
As of 4/19/2023, Vercel has the following limitations based on your plan:
Let's Create a Cron Job!
- Create a directory in the `/app/api` folder called `cron`.
- Create a folder within `cron` with the name of the cron job you want to run, for example: `/app/api/cron/github-metrics-sync`.
- Create a file called `route.ts`.
- Your code should look like this:

```javascript
// /api/cron/github-metrics-sync/route.ts
import { NextResponse } from 'next/server'

export async function GET() { ... }
```

Replace `...` with the code you want to run when the cron job is executed.
For my use case I am storing the metrics in a [PlanetScale](https://www.planetscale.com) database. I fetch the data from the GitHub API, parse it, and then update the database.
Let's see the full code for my route:
```javascript
import { Octokit } from '@octokit/rest'
import { NextResponse } from 'next/server'
import { updateGithubMetrics } from '../../../../lib/planetscale'
// zod env type checking
import { env } from 'env'

export async function GET() {
    const octokit = new Octokit({
        auth: env.GITHUB_TOKEN,
    })

    // retrieve all repos from my account (per_page caps this at the first 100)
    const repos = await octokit.request('GET /user/repos', {
        per_page: 100,
        affiliation: 'owner',
    })

    // count all repos
    const totalRepos = repos.data.length

    // retrieve the contributors for each repo and sum up my commits
    let totalCommits = 0
    for (const repo of repos.data) {
        const contributors = await octokit.request(
            'GET /repos/{owner}/{repo}/contributors',
            {
                owner: repo.owner.login,
                repo: repo.name,
            }
        )

        // only count commits from my account
        // (empty repos return no contributor data, hence the fallback)
        for (const contributor of contributors.data ?? []) {
            if (contributor.login === 'chris-nowicki') {
                totalCommits += contributor.contributions
            }
        }
    }

    try {
        // update the PlanetScale database with the new metrics
        await updateGithubMetrics(totalCommits, totalRepos)
        return NextResponse.json({ totalCommits, totalRepos })
    } catch (error) {
        console.error(error)
        return NextResponse.json({ error })
    }
}
```
I won't go over every line of code in this tutorial, but if you have any questions please let me know!
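One piece worth sketching is `updateGithubMetrics`, since it lives in my own `lib/planetscale` module and isn't covered here. Below is a minimal sketch only — the table name, columns, single-row layout, and the injected `execute` function are illustrative assumptions, not my actual implementation (a real version would pass something like the `execute` method of a `@planetscale/database` connection):

```typescript
// Hypothetical sketch of lib/planetscale's updateGithubMetrics.
// `Execute` stands in for your database client's query method.
type Execute = (sql: string, params: unknown[]) => Promise<unknown>

// Building the statement separately keeps it easy to inspect and test.
export function buildMetricsUpdate(totalCommits: number, totalRepos: number) {
    return {
        sql: 'UPDATE github_metrics SET total_commits = ?, total_repos = ? WHERE id = 1',
        params: [totalCommits, totalRepos],
    }
}

export async function updateGithubMetrics(
    execute: Execute,
    totalCommits: number,
    totalRepos: number
): Promise<void> {
    const { sql, params } = buildMetricsUpdate(totalCommits, totalRepos)
    await execute(sql, params)
}
```

The point is just that the route hands off two numbers and the helper runs a single parameterized UPDATE — all the heavy lifting stays in the cron route.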
FINAL STEP
Add a `vercel.json` file in the root of your project folder:
```json
{
    "crons": [
        {
            "path": "/api/cron/github-metrics-sync",
            "schedule": "*/10 * * * *"
        }
    ]
}
```
`path` is the location of the route we created in step #2. `schedule` tells Vercel how often the cron job should run. `*/10 * * * *` runs the cron job every 10 minutes. You can play around with different schedules using [crontab guru](https://crontab.guru).
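For example, if you only need the metrics refreshed once a day (which also keeps you well within the lower-tier plan limits), a standard daily schedule at midnight UTC looks like this — same path as above, only the schedule changes:

```json
{
    "crons": [
        {
            "path": "/api/cron/github-metrics-sync",
            "schedule": "0 0 * * *"
        }
    ]
}
```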
Once that is complete, deploy your project to Vercel. You can check the status of your cron job by going to the Settings > Crons menu for your project. You should see something like this:
9/11/23 UPDATE
There have been some updates to Next.js where cron job routes are rendered as static routes. This causes your data to be fetched at build time; any request after that returns the data that was generated at build time. You can check this by looking at your Vercel build log. Here is an example of my GitHub cron job route in my Vercel build log:
The route shows that it is a server component. However, if yours shows a circle, indicating a static route, you'll need to update your code. Add a route segment config before the route function:
```javascript
// Next.js route segment config
export const dynamic = 'force-dynamic' // force a dynamic (server) route instead of a static page

export async function GET() {
    // ...your cron job code here
}
```
NEXT STEPS
Check out my article on how to secure Vercel cron job routes.
CONCLUSION
I hope this was helpful! I found this to be a nice solution: the server takes on the expensive task, so visitors to my site don't have to wait on the GitHub API fetching and parsing for the /about page to load.
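To close the loop, the /about page then reads the cached numbers from the database instead of hitting GitHub on every visit. A hypothetical read-side counterpart to `updateGithubMetrics` might look like this (the helper name, table, and injected `query` function are assumptions for illustration, not my actual code):

```typescript
// Hypothetical read-side counterpart to the tutorial's updateGithubMetrics.
type Row = { total_commits: number; total_repos: number }
type Query = (sql: string) => Promise<Row[]>

export async function getGithubMetrics(query: Query): Promise<Row> {
    const rows = await query(
        'SELECT total_commits, total_repos FROM github_metrics WHERE id = 1'
    )
    // fall back to zeros if the cron job has not populated the row yet
    return rows[0] ?? { total_commits: 0, total_repos: 0 }
}
```

A server component can then await this helper and render the two numbers directly — one cheap SELECT instead of dozens of GitHub API calls per page view.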