
Laid Off? How to Scrape Job Boards, Company Data & Salary Info for Your Job Search

With 91,000 tech layoffs in 2026 so far, job seekers need every advantage. Learn how to scrape job boards, company directories, and salary data into a single spreadsheet to power your job search.

TL;DR

Tech layoffs in 2026 have already hit 91,000 workers, with Oracle cutting 20,000-30,000, Amazon laying off 16,000, and Block eliminating 4,000 roles. Instead of manually browsing dozens of job boards, you can use ScrapeMaster to scrape job listings, company details, and salary data into organized spreadsheets — turning a chaotic search into a structured, trackable process.

The 2026 tech layoff landscape

The tech industry is deep into another cycle of workforce reductions. The numbers for 2026 so far are staggering:

  • Oracle — 20,000 to 30,000 positions eliminated, one of the largest single-company layoffs in tech history
  • Amazon — 16,000 roles cut across multiple divisions
  • Block (Square/Cash App) — 4,000 positions eliminated
  • Dozens of smaller companies — Startups and mid-size firms across SaaS, fintech, and AI have collectively shed tens of thousands of additional jobs

If you are one of the 91,000 affected workers, your job search needs to be systematic. Browsing job boards one at a time, clicking through listings, and trying to remember what you have and have not applied to is a recipe for missed opportunities and burnout. Data-driven job searching is not just more efficient — it gives you a real edge.

Why scraping beats manual job searching

The fragmentation problem

Job listings are scattered across dozens of platforms:

  • General job boards — Indeed, LinkedIn Jobs, Glassdoor, ZipRecruiter
  • Tech-specific boards — Hacker News Who's Hiring, Wellfound (formerly AngelList), and Dice
  • Remote job boards — We Work Remotely, RemoteOK, FlexJobs, Remote.co
  • Company career pages — Direct listings that may not appear on aggregators
  • Industry niche boards — Specialized boards for DevOps, data science, product management, and other disciplines

No single board has every listing. Companies post to different boards, and aggregators miss significant numbers of positions. To see the full picture, you need to check multiple sources — and that is where scraping transforms the process.

What scraping gives you that browsing does not

  • Structured data — Instead of reading through page after page of listings, you get a spreadsheet with columns for title, company, location, salary, date posted, and link to apply
  • Deduplication — When you have all listings in one place, you can sort and filter to find duplicates across boards
  • Tracking — Add columns for application status, follow-up dates, and notes to turn your scraped data into a job search CRM
  • Comparison — Sort by salary range, filter by location, and compare companies side by side
  • Historical record — Export snapshots over time to see which companies are hiring consistently vs. one-off listings

How to scrape job listings step by step

Here is a practical workflow for building a comprehensive job search database using ScrapeMaster.

Step 1: Set up your search on a job board

Start with a major board like Indeed. Enter your target job title and location preferences. Apply any relevant filters — remote, salary range, date posted, experience level.

Step 2: Activate the scraper

Click the ScrapeMaster icon in your Chrome toolbar. The AI analyzes the page and automatically detects the data structure — job titles, company names, locations, salary ranges (when listed), posting dates, and links. This typically takes 2-4 seconds.

Step 3: Review and customize columns

The side panel shows an editable table with the detected data. You can:

  • Rename columns to match your preferred labels (for example, changing "Organization" to "Company")
  • Remove columns you do not need (for example, sponsored badges or ad indicators)
  • Reorder columns to put the most important information first

Step 4: Handle pagination

Most job boards show 15-25 results per page, and you want all of them. ScrapeMaster handles multiple pagination types:

  • Next-page buttons — It can automatically click through numbered pages or "Next" buttons
  • Load more buttons — For boards that use a "Load More" button to append results
  • Infinite scroll — For boards that load more results as you scroll down
  • Numbered pagination — Standard page navigation (page 1, 2, 3, etc.)

Let the scraper work through the pages while you get coffee. When it finishes, you have a complete dataset from that board.

Step 5: Follow links to detail pages

Many job boards show only a summary on the listing page — title, company, and location — but the full description, requirements, salary, and benefits are on the detail page. ScrapeMaster can follow these links to extract deeper data from each individual listing, adding columns for:

  • Full job description text
  • Requirements and qualifications
  • Salary ranges or compensation details
  • Benefits information
  • Application deadlines
  • Hiring manager or recruiter information (when listed)

This is where the real power emerges. Instead of clicking into every listing manually, the extension follows the links and brings the data back to your table.

Step 6: Export and repeat

Export your data as CSV (for Google Sheets), XLSX (for Excel), or JSON (for programmatic use). Then move to the next job board and repeat the process.

Step 7: Combine into a master spreadsheet

Import all your exported files into a single spreadsheet. Add columns for:

  • Source — Which job board the listing came from
  • Date scraped — When you collected the data
  • Application status — Not applied, applied, phone screen, interview, offer, rejected
  • Follow-up date — When to check back
  • Notes — Anything relevant about the company or role

Now you have a job search database that you can sort, filter, and update as your search progresses.
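
If you are comfortable running a short script, the combine-and-deduplicate step can also be done programmatically. Here is a minimal Python sketch using only the standard library; the file pattern and the "Title"/"Company" column names are assumptions, so adjust them to match your actual exports:

```python
import csv
import glob
import os

def merge_exports(pattern, out_path):
    """Combine per-board CSV exports into one deduplicated master file."""
    rows, seen = [], set()
    for path in sorted(glob.glob(pattern)):
        board = os.path.splitext(os.path.basename(path))[0]
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                row["Source"] = board  # remember which board it came from
                # Treat the same title + company as a duplicate across boards
                key = (row.get("Title", "").strip().lower(),
                       row.get("Company", "").strip().lower())
                if key in seen:
                    continue
                seen.add(key)
                rows.append(row)
    if rows:
        with open(out_path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)
    return rows
```

Point it at a folder of exports (for example, `merge_exports("exports/*.csv", "master.csv")`) and import the resulting master file into your spreadsheet of choice.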

Scraping company information for research

Knowing which companies are hiring is only half the battle. You also need to research those companies to prioritize applications and prepare for interviews.

Company directories and databases

Use ScrapeMaster to extract data from company information sources:

  • Crunchbase — Funding rounds, company size, leadership team, industry category
  • Glassdoor — Company ratings, interview experiences, salary data, pros and cons from employees
  • LinkedIn company pages — Employee count, recent hires, growth trends
  • Built In — Tech stacks, perks, and culture information for tech companies
  • Craft.co or Owler — Revenue estimates, competitor information, recent news

For each company that appears in your job listings, you can quickly scrape their profile page to add context columns to your spreadsheet: company size, funding stage, Glassdoor rating, industry vertical.

Building a company comparison table

A practical approach:

  • Start with the unique company names from your job listings
  • Visit each company's Glassdoor or Crunchbase page
  • Use ScrapeMaster to extract the key metrics
  • Join this data with your job listings spreadsheet by company name

This gives you a view like: "Software Engineer at Company X — Series C, 500 employees, 4.2 Glassdoor rating, $150K-$180K salary range" all in one row.
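
The join itself is simple enough to sketch in a few lines of Python. The column names and the sample entry below are hypothetical; in practice the research dictionary would be built from your Crunchbase or Glassdoor scrapes:

```python
# Hypothetical research data keyed by lowercased company name;
# in practice this comes from your company-page scrapes.
company_info = {
    "acme": {"Size": "500", "Funding": "Series C", "Glassdoor": "4.2"},
}

def enrich(listing, info):
    """Merge research columns into a job-listing row, matched by company name."""
    extra = info.get(listing.get("Company", "").strip().lower(), {})
    return {**listing, **extra}
```

Running each listing row through `enrich` produces exactly that one-row view, with the research columns appended next to the job details.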

Scraping salary data

Salary transparency is improving, but many listings still omit compensation. You can fill in the gaps by scraping salary databases.

Sources for salary data

  • Levels.fyi — Detailed compensation data for tech roles, including base, stock, and bonus breakdowns
  • Glassdoor salary data — Self-reported salaries by title and company
  • Salary.com and PayScale — Broader salary databases with cost-of-living adjustments
  • H1B salary data (h1bdata.info) — Actual salaries from H1B visa applications, which are public record
  • State transparency databases — Some states publish public employee salaries, useful for government and education roles

How to scrape salary data effectively

Navigate to a salary comparison page, run ScrapeMaster to detect the table of salaries, and export. The AI typically detects columns like job title, company, base salary, total compensation, and location automatically.

You can then cross-reference this data with your job listings to estimate compensation for roles that do not list salary ranges. If a company is not transparent about pay, knowing what they have paid in the past (via H1B data or employee reports) gives you a significant advantage in negotiations.
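
One simple way to do that cross-referencing is to take the median of scraped salaries for a given title. A sketch, assuming your salary export has "Title" and "Base" columns (adjust the names to whatever your source actually provides):

```python
from statistics import median

def estimate_salary(title, salary_rows):
    """Median base salary from scraped rows matching a job title.

    Returns None when no rows match. Column names ("Title", "Base")
    are assumptions about the export format."""
    matches = [float(r["Base"]) for r in salary_rows
               if r["Title"].strip().lower() == title.strip().lower()]
    return median(matches) if matches else None
```

Fill the estimate into a "Salary Range" column for any listing that omits compensation, and flag it as estimated rather than listed.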

Advanced job search scraping strategies

Monitor new listings over time

Do not scrape once and call it done. Job boards update daily. Set a weekly schedule to rescrape your target searches and compare with your existing data. New listings that were not in your previous export are fresh opportunities.
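
Comparing this week's export against last week's is a set-difference operation. A minimal sketch, matching listings on title plus company (both column names are assumptions):

```python
def new_listings(this_week, last_week):
    """Rows present in this week's scrape but absent from last week's."""
    seen = {(r["Title"].strip().lower(), r["Company"].strip().lower())
            for r in last_week}
    return [r for r in this_week
            if (r["Title"].strip().lower(), r["Company"].strip().lower()) not in seen]
```

Run it on each rescrape and you get a short list of fresh opportunities to triage, instead of rereading the whole board.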

Scrape "Who's Hiring" threads

Hacker News posts a monthly "Who's Hiring" thread where companies post directly. These often contain roles that do not appear on traditional job boards. The thread format is perfect for scraping — each comment is a structured listing with company name, role, location, and requirements.

Extract data from company career pages

Many companies post roles on their own career pages before they appear on job boards. If you have a list of target companies, visit each career page and scrape their current openings. This can give you a head start on applications.

Track layoff data for opportunities

Paradoxically, companies conducting layoffs sometimes continue hiring for different roles. Use layoff tracking sites (Layoffs.fyi, TrueUp) to identify companies in transition — they may be eliminating some departments while aggressively hiring in others.

Building a personal job search CRM

The end goal is a single spreadsheet that functions as a lightweight CRM for your job search. Here is a practical column structure:

  • Job Title — The role title
  • Company — Company name
  • Location — City, remote, or hybrid
  • Salary Range — Listed or estimated
  • Date Posted — When the listing appeared
  • Source — Which job board or site
  • Company Size — Employees
  • Glassdoor Rating — From your company research scrape
  • Funding Stage — For startups (seed, Series A, etc.)
  • Apply Link — Direct link to the application
  • Status — Not applied / Applied / Phone Screen / Interview / Offer / Rejected
  • Date Applied — When you submitted
  • Follow-Up Date — When to check back
  • Notes — Contact names, interview impressions, etc.

This structure turns scattered job board browsing into an organized pipeline. You can sort by salary, filter by status, and track your overall application volume and conversion rates.

Exporting and sharing your data

ScrapeMaster exports to CSV, XLSX, and JSON. For most job seekers:

  • CSV works best for importing into Google Sheets (free, accessible from any device)
  • XLSX is ideal for Excel power users who want pivot tables and conditional formatting
  • JSON is useful if you want to build a custom dashboard or feed data into another tool

If you want to share a polished version of your research with a career coach or mentor, a Convert extension can turn your spreadsheet data into a clean PDF.

Scraping tips specific to major job boards

Indeed

Indeed uses a mix of organic and sponsored listings. ScrapeMaster will detect both. You may want to remove the "Sponsored" column or filter those out after export. Indeed paginates with numbered pages, which the extension handles automatically. Detail pages contain the full job description and often include salary estimates.

LinkedIn Jobs

LinkedIn shows job listings to logged-in users with more detail. When scraping LinkedIn, you are already authenticated in your browser session, so the extension sees what you see. LinkedIn uses infinite scroll for search results. Note that LinkedIn's data is particularly valuable because it often includes the hiring manager's name and how many applicants have already applied.

Glassdoor

Glassdoor requires login to see full data, but once you are logged in, ScrapeMaster can extract job listings with salary estimates, company ratings, and interview difficulty ratings. This is one of the richest data sources for job seekers because it combines listing data with company intelligence.

Remote job boards

Sites like We Work Remotely, RemoteOK, and FlexJobs tend to have simpler layouts that are easy to scrape. They typically show all relevant information on the listing page, reducing the need to follow detail links. The data often includes timezone requirements and remote policy details that general boards miss.

What to do with your data after scraping

Prioritize with scoring

Assign scores to listings based on your criteria. For example:

  • Salary above target: +2 points
  • Glassdoor rating above 4.0: +1 point
  • Remote or preferred location: +1 point
  • Company size in your preferred range: +1 point
  • Posted within last 7 days: +1 point

Sort by total score to focus your applications on the highest-value opportunities.
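
If your data lives outside a spreadsheet, the same scoring rubric is a few lines of Python. The field names and thresholds here are illustrative, not fixed:

```python
def score(row):
    """Additive score mirroring the example criteria above."""
    s = 0
    if row.get("salary", 0) >= 150_000:        # salary above target: +2
        s += 2
    if row.get("glassdoor", 0) > 4.0:          # well-rated employer: +1
        s += 1
    if row.get("remote", False):               # remote or preferred location: +1
        s += 1
    if row.get("size_ok", False):              # company size in preferred range: +1
        s += 1
    if row.get("days_since_posted", 99) <= 7:  # posted within last 7 days: +1
        s += 1
    return s
```

In a spreadsheet, the equivalent is a column of IF formulas summed into a "Score" column you can sort on.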

Identify patterns

Once you have a few hundred listings in your database, look for patterns:

  • Which job titles appear most frequently for your skill set?
  • What salary ranges are typical for your target roles?
  • Which cities or companies are hiring most actively?
  • What skills or qualifications appear repeatedly in descriptions?

These patterns can guide your resume customization, skill development priorities, and geographic preferences.

Track your pipeline

Treat your job search like a sales pipeline. Monitor metrics like:

  • Total applications submitted per week
  • Response rate (callbacks / applications)
  • Interview conversion rate
  • Time from application to first response

If your response rate is low, your scraped data can help you identify what successful listings have in common and adjust your targeting.
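
These pipeline numbers fall straight out of the Status column. A sketch, assuming the status values suggested earlier in this article:

```python
def pipeline_metrics(rows):
    """Application counts and response rate from a Status column."""
    applied = [r for r in rows if r.get("Status", "Not applied") != "Not applied"]
    responded = [r for r in applied
                 if r["Status"] in ("Phone Screen", "Interview", "Offer")]
    return {
        "applications": len(applied),
        "responses": len(responded),
        "response_rate": len(responded) / len(applied) if applied else 0.0,
    }
```

Recompute weekly and watch the trend: a flat response rate across many applications is a signal to change targeting, not just volume.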

Ethical considerations for job search scraping

Job search scraping is one of the most clearly legitimate use cases for web scraping:

  • You are collecting public information — Job listings are published specifically to attract applicants. They are public by design.
  • Personal use — You are using the data for your own job search, not reselling it.
  • No circumvention — You browse job boards normally and the extension reads what is on the page.
  • Beneficial purpose — Helping laid-off workers find employment more efficiently is a broadly positive activity.

That said, be reasonable:

  • Do not scrape at speeds that overload job board servers
  • Do not republish large collections of listings
  • Respect any rate limits or access controls you encounter
  • Use the data for your own search, not for building a competing job board

Frequently asked questions

How do I scrape job listings without coding?

Install ScrapeMaster from the Chrome Web Store (free, no account needed). Navigate to any job board search results page. Click the extension icon and the AI auto-detects job titles, companies, locations, salaries, and links in 2-4 seconds. Edit the table in the side panel, handle pagination, and export to CSV, XLSX, or JSON.

Can I scrape Indeed and LinkedIn for job listings?

Yes. Both sites display job listings publicly (LinkedIn requires login for full details). A browser extension scrapes what you see in your browser, which is functionally equivalent to manually reading the listings. This is different from running automated server-side scrapers against these sites.

How do I combine data from multiple job boards?

Export each job board's data as CSV. Open a master spreadsheet and import each CSV as a separate sheet, or append everything to a single sheet with an added "Source" column. Then deduplicate by sorting on job title and company name and deleting repeated rows — most spreadsheet tools also have a built-in remove-duplicates feature.

What salary data can I scrape?

Levels.fyi, Glassdoor, Salary.com, PayScale, and H1B salary databases all contain scrape-friendly salary data. The data is typically displayed in tables that ScrapeMaster detects automatically. Cross-reference with your job listings to estimate compensation for roles that do not list salary ranges.

How often should I rescrape job boards?

Weekly is a good cadence for most job seekers. New listings appear daily, but weekly scraping strikes a balance between freshness and effort. For high-demand roles or fast-moving markets, twice weekly may be worthwhile.

Can I track my applications in the scraped spreadsheet?

Yes. After exporting, add columns for application status, date applied, follow-up date, and notes. This turns your scraped data into a personal job search CRM that keeps everything in one place.

Bottom line

Being laid off is stressful enough without the chaos of a fragmented job market. With 91,000 tech workers already displaced in 2026, a systematic approach to job searching is not a luxury — it is a necessity. Scraping job boards, company data, and salary information into a structured spreadsheet gives you visibility across the entire market and helps you prioritize the best opportunities.

ScrapeMaster makes this process accessible to anyone, regardless of technical background. Click the icon, let the AI detect the data, customize your columns, handle pagination, and export. It is free, has no limits, requires no account, and works across every major job board. Combine it with a Convert extension to turn your research into shareable PDFs for career coaches or networking contacts.

Your next role is out there. Build the data advantage that helps you find it faster.

Try our free Chrome extensions

Privacy-first tools that actually work. No paywalls, no tracking, no data collection.