The release of highly sought-after blind boxes often concludes in an instant, leaving countless enthusiasts defeated in their struggle against automated scripts. By building a dedicated automation tool, ordinary collectors can achieve millisecond-level responses, real-time inventory monitoring, and compete on an equal footing with scalpers using professional tools. Are you ready to acquire those coveted rare collectibles?
Why Automate Blind Box Purchases?
Among Pop Mart's dazzling array of designer toys, the CRYBABY blind box series stands out with its unique charm. Created by artist Molly Yllom, the series cleverly incorporates complex emotional expressions, deeply resonating with collectors who value mental health and emotional connection.
This series has successfully launched several popular themes, such as "Crying Again," "Tears Factory," and "Crying for Love," and has collaborated with well-known brands like The Powerpuff Girls. Each theme typically includes 6 to 12 distinct designs, while the highly coveted secret editions appear approximately once every 72 to 288 blind boxes, with specific rarity depending on the series.
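As a back-of-the-envelope illustration of those odds, the chance of pulling at least one secret edition in n independent draws is 1 − (1 − p)^n. The 1-in-144 rate below is just a midpoint assumption between the 1/72 and 1/288 figures above, not an official figure:

```python
def secret_edition_odds(boxes: int, rate: float = 1 / 144) -> float:
    """Probability of pulling at least one secret edition in `boxes` pulls,
    assuming each box is an independent draw at the given rate."""
    return 1 - (1 - rate) ** boxes

# Even buying a full case of 12 boxes leaves the odds around 8%
print(f"{secret_edition_odds(12):.1%}")
```

This is why bulk buying multiplies demand without guaranteeing a secret edition: the odds grow slowly and never reach certainty.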
For blind box collectors, manual purchasing is extremely difficult, primarily due to the deep emotional attachment fans have to these artworks. These figures are not just simple consumer goods; they are meaningful components of personal collections. This strong emotional connection leads to market demand far exceeding actual supply, making automated purchasing a crucial strategy to ensure successful acquisition.
Why Manual Blind Box Purchases Fail
A deeper understanding of the technical barriers faced by manual purchasing in blind box releases reveals the indispensable nature of automation tools.
Pop Mart's official website deploys multiple layers of complex anti-automation mechanisms, including CAPTCHA recognition, IP-based access frequency restrictions, and regional access controls. During peak purchasing periods when product demand surges, servers often become overloaded due to immense traffic, leading to extremely slow page loading or even complete crashes, making manual operations almost impossible.
The blind box sales model further intensifies competition because consumers cannot determine the specific design before purchasing. Many collectors, aiming for a desired specific design or a rare secret edition, choose to buy in bulk, which undoubtedly multiplies market demand, causing items to sell out in mere minutes or even seconds.
In contrast, professional market speculators (commonly known as "scalpers") generally employ highly automated trading systems, capable of processing at millisecond speeds. Ordinary consumers, relying solely on traditional browsers for manual operations, simply cannot compete with the speed of automated systems. Therefore, developing a dedicated automated purchasing bot has become the only effective way to ensure successful acquisition of limited edition blind boxes.
Core Components for Building a Blind Box Sniping Bot
To successfully acquire your desired blind box figures during Pop Mart's limited releases, you need to meticulously integrate a series of synergistic technical elements.
Python Programming Language
Python is the core pillar for building an automated purchasing system, providing the programming foundation required to execute automation scripts. With its excellent flexibility, Python can effectively handle the complex architectural design of the Pop Mart website and quickly adapt to dynamic changes in inventory status.
Playwright Browser Automation Framework
Playwright will be your powerful tool for browser automation, allowing your scripts to operate on the Pop Mart website in a highly simulated manner. Whether it's handling complex JavaScript pages, simulating button clicks, or performing other navigation behaviors, Playwright can execute precisely like a real user, guiding the purchase process to completion without manual intervention.
Chrome or Firefox Browser Drivers
These drivers provide the runtime environment for Playwright. Ensuring that the browser version you use is compatible with the corresponding driver is crucial for smooth execution of automated processes during critical purchasing moments.
Nstproxy Residential Proxy Service
Nstproxy residential proxy service provides crucial protection for your automated operations by dynamically changing IP addresses from real home devices. This mechanism effectively prevents detection by website anti-cheat systems, ensuring your access rights are unaffected during high-traffic releases. It efficiently bypasses website IP restrictions and various anti-crawler strategies, making your purchasing behavior appear natural and undetected.
Pop Mart User Account
Some product purchases require linking to a Pop Mart account. Having and managing multiple active accounts will significantly increase your chances of successfully acquiring limited edition items in fierce competition, as each account represents an independent purchasing channel.
Analyzing Pop Mart's Checkout Process
To successfully build an efficient and reliable blind box sniping bot, a deep understanding of Pop Mart's underlying architecture and its checkout process is essential.
Product pages typically follow a uniform URL structure containing a series identifier and a specific product code. For example, blind box products often appear under paths like "/us/products/[product_number]/CRYBABY…", which gives automated monitoring and data extraction a predictable anchor.
Real-time changes in inventory status trigger different feedback in page display and interaction, and your bot must accurately identify these changes. For instance, when an item is out of stock, the "Add to Cart" button becomes unavailable, while in-stock items display operable purchase options and update inventory information in real time.
Pop Mart's anti-bot defense mechanisms are present throughout various stages of the purchase process. Therefore, your automation system needs to be capable of handling CAPTCHAs, validating user session effectiveness, and simulating browsing behavior patterns indistinguishable from human users to effectively circumvent these security detections.
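One common tactic for the "human-like behavior" part is inserting randomized, human-scale pauses between scripted actions. A minimal helper sketch; the uniform jitter is a simplification, since real traffic shaping often uses log-normal delays to better mimic human reaction times:

```python
import random
import time

def human_pause(base: float = 0.8, jitter: float = 0.6) -> float:
    """Sleep for a randomized, human-scale interval and return the delay used."""
    delay = base + random.uniform(0, jitter)
    time.sleep(delay)
    return delay

# Intended usage between scripted clicks, e.g.:
#   await page.click(add_to_bag_selector)
#   human_pause()
#   await page.click(cart_selector)
```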
Building the Core Bot Architecture
To maximize the success rate of purchases, you need to build a reliable bot capable of automating the blind box purchasing process.
Step 1: Environment Setup
First, ensure your operating environment is ready and Python and its necessary dependencies are installed:
- Install Python. Visit the official Python download page to get the latest version. During installation, be sure to check the "Add Python to PATH" option for easier script execution later.
- Install Playwright. Run the following pip command in the terminal to install Playwright, then run its accompanying command to download the required browser drivers:

pip install playwright
python -m playwright install

- Choose a Task Scheduler. Given that your automation script needs to perform specific operations at preset times, a stable and reliable task scheduler is crucial. APScheduler is a highly recommended option:

pip install apscheduler
Step 2: Project Directory Planning
Create a dedicated folder to organize your project files, including all scripts and generated data. To maintain code cleanliness and environmental isolation, it is strongly recommended to use a Python virtual environment. Navigate to your newly created project directory in the terminal:
cd path/to/your/project
Your bot system will consist of multiple files. You can choose to create all files at once or add them incrementally as development progresses. For clarity, here is a suggested project structure example:
popmart-bot/ (main directory)
- data/ (data storage directory)
  - products.json (scraped product information)
- job-scheduler.py (task scheduling script)
- main.py (main entry script)
- popmart-scraper.py (product information scraping script)
- purchase-bot.py (purchase execution script)
Step 3: Main Control Script Development
First, create a core entry point for your bot system. Create a new file named main.py
and add the following Python code:
import subprocess
import time
from datetime import datetime, timedelta

from apscheduler.schedulers.blocking import BlockingScheduler

# Define maximum retries and retry delay for the scraper
MAX_RETRIES = 5
RETRY_DELAY = 10

# Set the daily scraper run time (e.g., 6:00 AM)
HOUR = 6
MINUTE = 0

scheduler = BlockingScheduler()

def execute_daily_scraper():
    """Run popmart-scraper.py and trigger job-scheduler.py upon its completion."""
    print(f"\nStarting popmart-scraper, current time: {datetime.now().strftime('%H:%M:%S')}")
    for attempt_num in range(1, MAX_RETRIES + 1):
        print(f"Attempting to run scraper, attempt {attempt_num}...")
        try:
            subprocess.run(["python3", "popmart-scraper.py"], check=True)
            print("New product information scraping completed successfully.")
            # After successful scraping, immediately schedule job-scheduler to run
            scheduled_run_time = datetime.now() + timedelta(seconds=5)
            scheduler.add_job(execute_job_scheduler, trigger='date', run_date=scheduled_run_time)
            print(f"job-scheduler.py will start at {scheduled_run_time.strftime('%H:%M:%S')}.")
            return  # Exit loop early on success
        except subprocess.CalledProcessError as e:
            print(f"Scraper run failed (attempt {attempt_num}), exit code: {e.returncode}")
            if attempt_num < MAX_RETRIES:
                print(f"Retrying in {RETRY_DELAY} seconds...")
                time.sleep(RETRY_DELAY)
    print("All scraper run attempts failed. Please check popmart-scraper.py for issues.")

def execute_job_scheduler():
    print("\nRunning job-scheduler.py")
    try:
        subprocess.run(["python3", "job-scheduler.py"], check=True)
    except subprocess.CalledProcessError as e:
        print(f"Task scheduler run failed, exit code: {e.returncode}")
        print("Please check job-scheduler.py for issues.")

if __name__ == "__main__":
    print("main.py script started...")
    execute_daily_scraper()  # Execute scraper immediately when the script starts
    # Configure the daily scheduled scraper task
    scheduler.add_job(execute_daily_scraper, 'cron', hour=HOUR, minute=MINUTE)
    print(f"Daily new product scraper scheduled to run every day at {HOUR:02d}:{MINUTE:02d}.")
    try:
        scheduler.start()
    except (KeyboardInterrupt, SystemExit):
        scheduler.shutdown()
        print("Task scheduler gracefully shut down.")
The core functions of this main script include:
- Initialize the scraper task. main.py immediately executes popmart-scraper.py on startup to fetch the latest product information.
- Linked task scheduling. Once the scraper completes successfully, it automatically triggers job-scheduler.py, which processes and arranges the subsequent purchase tasks.
- Robust retry mechanism. If popmart-scraper.py fails, the script waits the preset 10 seconds before automatically retrying, up to 5 attempts, to improve the task's success rate.
- Daily scheduled scraping. Cron-style scheduling ensures popmart-scraper.py runs automatically at the specified time each day, continuously updating product data.
Step 4: New Product Page Data Scraping
Next comes the full popmart-scraper.py script; the key part is the integrated Nstproxy proxy configuration:
import asyncio
import json
import os
import sys

from playwright.async_api import async_playwright

# Define target keywords for filtering relevant products
TARGET_KEYWORDS = ["CRYBABY", "Crybaby"]

# Base URL of the Pop Mart website
BASE_URL = "https://www.popmart.com"

# Output file path for saving scraped product data
OUTPUT_FILE = os.path.join("data", "products.json")

# Nstproxy proxy service configuration (please replace with your actual credentials)
# This information can usually be found in the "Proxy Setup" section of your Nstproxy user dashboard
NSTPROXY_USERNAME = "your_nstproxy_username"
NSTPROXY_PASSWORD = "your_nstproxy_password"
NSTPROXY_HOST = "gate.nstproxy.io"  # Nstproxy gateway address, may vary
NSTPROXY_PORT = "24125"             # Nstproxy port number, may vary

async def scrape_popmart_new_arrivals():
    print("New product information scraping task started...")
    try:
        async with async_playwright() as p:
            # Playwright expects proxy credentials as separate fields,
            # not embedded in the server URL
            proxy_settings = {
                "server": f"http://{NSTPROXY_HOST}:{NSTPROXY_PORT}",
                "username": NSTPROXY_USERNAME,
                "password": NSTPROXY_PASSWORD,
            }
            # Launch a Chromium instance with the Nstproxy proxy; the
            # browser-level proxy applies to every context created from it
            browser_instance = await p.chromium.launch(
                headless=True,  # Headless mode for efficiency
                proxy=proxy_settings,
            )
            browser_context = await browser_instance.new_context()
            page_instance = await browser_context.new_page()

            # Navigate to the Pop Mart new arrivals page with a timeout
            await page_instance.goto("https://www.popmart.com/us/new-arrivals", timeout=30000)

            # Wait for the section titles to load, indicating the page is mostly rendered
            await page_instance.wait_for_selector("div.index_title__jgc2z")

            # Attempt to close a potential location selection pop-up
            try:
                location_popup_selector = "div.index_siteCountry___tWaj"
                # Briefly wait for the pop-up; no error if it is absent
                await page_instance.wait_for_selector(location_popup_selector, timeout=2000)
                await page_instance.click(location_popup_selector)
                print("Location selection pop-up closed successfully.")
            except Exception:
                print("No location selection pop-up detected, continuing execution.")

            # Attempt to close a potential policy acceptance window
            try:
                policy_accept_selector = "div.policy_acceptBtn__ZNU71"
                await page_instance.wait_for_selector(policy_accept_selector, timeout=8000, state="visible")
                policy_button = await page_instance.query_selector(policy_accept_selector)
                if policy_button:
                    await asyncio.sleep(1)  # Small buffer to let JavaScript finish loading
                    await policy_button.click()
                    print("Policy acceptance button clicked successfully.")
                else:
                    print("Could not find policy acceptance button.")
            except Exception as e:
                print(f"Policy acceptance pop-up did not appear or click failed: {e}")

            collected_results = []

            # Find all sections containing new product information
            info_sections = await page_instance.query_selector_all("div.index_title__jgc2z")
            for section_element in info_sections:
                release_date_text = (await section_element.text_content()).strip()
                # Get the product list container adjacent to the current section
                next_sibling_element = await section_element.evaluate_handle("el => el.nextElementSibling")
                product_card_elements = await next_sibling_element.query_selector_all("div.index_productCardCalendarContainer__B96oH")
                for card_element in product_card_elements:
                    # Extract the product title
                    title_span = await card_element.query_selector("div.index_title__9DEwH span")
                    product_title = await title_span.text_content() if title_span else ""
                    # Filter products by keywords
                    if not any(keyword.lower() in product_title.lower() for keyword in TARGET_KEYWORDS):
                        continue
                    # Extract the release time
                    time_div = await card_element.query_selector("div.index_time__EyE6b")
                    release_time_text = await time_div.text_content() if time_div else "N/A"
                    # Extract the product URL
                    link_element = await card_element.query_selector("a[href^='/us']")
                    product_href = await link_element.get_attribute("href") if link_element else None
                    full_product_url = f"{BASE_URL}{product_href}" if product_href else "N/A"
                    # Organize the scraped data
                    product_entry = {
                        "title": product_title.strip(),
                        "release_date": release_date_text,          # e.g., "Upcoming JUL 11"
                        "release_time": release_time_text.strip(),  # e.g., "09:00"
                        "url": full_product_url,
                    }
                    collected_results.append(product_entry)

            await browser_instance.close()

            # Save the scraping results as a JSON file
            os.makedirs("data", exist_ok=True)
            with open(OUTPUT_FILE, "w", encoding="utf-8") as f:
                json.dump(collected_results, f, indent=2, ensure_ascii=False)
            print(f"Successfully scraped {len(collected_results)} matching products. Data saved to {OUTPUT_FILE}")
    except Exception as e:
        print(f"Error during new product scraping: {e}")
        sys.exit(1)  # Exit with error code 1 on task failure

if __name__ == "__main__":
    asyncio.run(scrape_popmart_new_arrivals())
The main function of this script is to visit Pop Mart's "New Arrivals" page and extract product release schedule information, saving each product's name, release date, release time, and URL to the data/products.json file.
In addition, this script has the following features:
- Intelligent handling of website pop-ups and navigation. It can automatically identify and close potential location selection and policy acceptance pop-up windows, ensuring smooth page access.
- Integrated Nstproxy proxy service. All network requests will be routed through the pre-configured Nstproxy proxy server, effectively bypassing website IP restrictions and rate limiting, ensuring the stability and anonymity of the scraping process.
- Keyword filtering mechanism. It only collects product information where the title contains "CRYBABY" or "Crybaby" keywords, ignoring other irrelevant products, thereby improving the accuracy of data scraping.
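Since the scheduler downstream assumes every entry in products.json carries all four fields, it can help to validate the file's shape before scheduling. A small guard sketch; the key names match the product_entry dictionary above:

```python
import json

REQUIRED_KEYS = {"title", "release_date", "release_time", "url"}

def validate_products(raw_json: str) -> list[dict]:
    """Parse products.json content and drop entries missing required keys,
    so a partially failed scrape can't crash the scheduler later."""
    entries = json.loads(raw_json)
    return [e for e in entries if REQUIRED_KEYS.issubset(e)]

sample = '''[
  {"title": "CRYBABY Tears Factory", "release_date": "Upcoming JUL 11",
   "release_time": "09:00", "url": "https://www.popmart.com/us/products/1234/x"},
  {"title": "Broken entry"}
]'''
print(len(validate_products(sample)))  # 1
```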
Step 5: Task Scheduler Configuration
The job-scheduler.py script is the heart of the automated purchasing system, responsible for the core task scheduling logic:
import json
import os
import subprocess
import sys
import time
from datetime import datetime

from apscheduler.schedulers.background import BackgroundScheduler

# Define the data file path and retry parameters
DATA_FILE = os.path.join("data", "products.json")
MAX_RETRIES = 5
RETRY_DELAY = 10

def parse_product_release_datetime(date_string, time_string):
    """Convert strings like "Upcoming JUL 11" and "09:00" to a datetime,
    defaulting to the current year."""
    try:
        # Strip status keywords from the date string
        for keyword_to_remove in ["Upcoming", "In Stock"]:
            date_string = date_string.replace(keyword_to_remove, "").strip()
        # Combine date, year, and time, then parse, e.g. "JUL 11 2025 09:00"
        full_datetime_string = f"{date_string} {datetime.now().year} {time_string}"
        return datetime.strptime(full_datetime_string, "%b %d %Y %H:%M")
    except Exception as e:
        print(f"Failed to parse datetime, source string: '{date_string} {time_string}', error: {e}")
        return None

def initiate_purchase_bot(product_details):
    """Launch purchase-bot.py with built-in retry logic."""
    product_url = product_details.get("url")
    product_title = product_details.get("title")
    for attempt_count in range(MAX_RETRIES + 1):  # First attempt plus retries
        print(f"Launching purchase bot for '{product_title}' (attempt {attempt_count + 1}/{MAX_RETRIES + 1})...")
        try:
            # purchase-bot.py carries its own Nstproxy configuration
            subprocess.run(["python3", "purchase-bot.py", product_url], check=True)
            print(f"Successfully launched purchase bot for '{product_title}'.")
            return  # Exit immediately on success
        except subprocess.CalledProcessError as e:
            print(f"Purchase bot run failed (attempt {attempt_count + 1}), exit code: {e.returncode}")
            if attempt_count < MAX_RETRIES:
                print(f"Retrying in {RETRY_DELAY} seconds...")
                time.sleep(RETRY_DELAY)
    print(f"All attempts to launch purchase bot for '{product_title}' failed.")

if __name__ == "__main__":
    scheduler_instance = BackgroundScheduler()
    if not os.path.exists(DATA_FILE):
        print(f"Error: Data file {DATA_FILE} does not exist. Run popmart-scraper.py first to generate it.")
        sys.exit(1)
    with open(DATA_FILE, "r", encoding="utf-8") as f:
        all_products = json.load(f)
    for product_item in all_products:
        release_full_datetime = parse_product_release_datetime(product_item["release_date"], product_item["release_time"])
        if release_full_datetime and release_full_datetime > datetime.now():
            scheduler_instance.add_job(initiate_purchase_bot, 'date', run_date=release_full_datetime, args=[product_item])
            print(f"Scheduled purchase bot for '{product_item['title']}' at {release_full_datetime.strftime('%Y-%m-%d %H:%M:%S')}.")
    try:
        scheduler_instance.start()
        print("Task scheduler started successfully. Waiting for scheduled tasks to execute...")
        # Keep the main thread alive so the background scheduler keeps running
        while True:
            time.sleep(2)
    except (KeyboardInterrupt, SystemExit):
        scheduler_instance.shutdown()
        print("Task scheduler stopped running.")
The core function of this script is to read product information stored in products.json
and dynamically create a purchase task for each upcoming product release. It accurately parses the product's release date and time, and automatically launches the purchase-bot.py
script at the preset release moment to execute the purchase operation.
Step 6: Purchase Execution Bot Development
The purchase-bot.py
script is the key component that actually executes the purchase action. It will leverage the Playwright browser automation framework, combined with the Nstproxy proxy service, to simulate real user behavior and complete product sniping.
import asyncio
import sys

from playwright.async_api import async_playwright

# Nstproxy proxy service configuration (please replace with your actual credentials)
NSTPROXY_USERNAME = "your_nstproxy_username"
NSTPROXY_PASSWORD = "your_nstproxy_password"
NSTPROXY_HOST = "gate.nstproxy.io"  # Nstproxy gateway address
NSTPROXY_PORT = "24125"             # Nstproxy port number

async def execute_purchase_process(target_product_url):
    print(f"Attempting to access product page: {target_product_url}")
    try:
        async with async_playwright() as p:
            # Playwright expects proxy credentials as separate fields
            proxy_settings = {
                "server": f"http://{NSTPROXY_HOST}:{NSTPROXY_PORT}",
                "username": NSTPROXY_USERNAME,
                "password": NSTPROXY_PASSWORD,
            }
            # Launch Chromium with the proxy; headless=False lets you observe the bot
            browser_instance = await p.chromium.launch(
                headless=False,
                proxy=proxy_settings,
            )
            browser_context = await browser_instance.new_context()
            page_instance = await browser_context.new_page()

            # Navigate to the target product page with a generous timeout for network delays
            await page_instance.goto(target_product_url, timeout=60000)

            # Wait for the "ADD TO BAG" button to appear and click it
            # Note: adjust this selector to the actual HTML of the Pop Mart site
            add_to_bag_button_selector = 'button:has-text("ADD TO BAG")'
            await page_instance.wait_for_selector(add_to_bag_button_selector, timeout=30000)
            await page_instance.click(add_to_bag_button_selector)
            print(f'Successfully clicked "ADD TO BAG" for product: {target_product_url}')

            # Wait for the cart link and navigate to the shopping cart
            # Note: adjust this selector to the actual HTML of the Pop Mart site
            shopping_cart_selector = "a[href*='/cart']"
            await page_instance.wait_for_selector(shopping_cart_selector, timeout=30000)
            await page_instance.click(shopping_cart_selector)
            print("Successfully navigated to the shopping cart page.")
            print("Browser remains open for you to manually complete the checkout process.")

            # Keep the browser open for an hour so the user can log in and pay manually
            await page_instance.wait_for_timeout(3600000)
            await browser_instance.close()
            return 0  # Purchase process completed successfully
    except Exception as e:
        print(f"Error during purchase process: {e}")
        return 1  # Purchase process failed

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("Usage: python3 purchase-bot.py <product_URL>")
        sys.exit(1)
    product_url_from_args = sys.argv[1]
    exit_code_result = asyncio.run(execute_purchase_process(product_url_from_args))
    sys.exit(exit_code_result)
This script's function is to receive a product URL as an argument, then use Playwright to visit that product page, simulate clicking the "Add to Bag" button, and finally navigate to the shopping cart page. The browser will remain open so that the user can manually log in and complete the final payment steps.
Step 7: Bot System Startup
To start your blind box sniping automation system, simply run the main.py
script in your terminal:
python main.py
Advanced Strategies for High-Competition Releases
When facing fiercely competitive blind box releases, relying solely on basic automation features may not be enough to ensure success. Here are some advanced strategies that can help your bot system gain a significant advantage:
Multi-Account Coordination
By coordinating purchases across multiple accounts simultaneously, you can significantly increase the chances of success in limited releases. A robust account management system can effectively handle authentication and checkout processes across multiple user profiles, leading to broader coverage and higher success rates.
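A hypothetical sketch of the coordination layer: build one purchase-bot invocation per account, each with its own browser profile directory so sessions stay isolated. The --profile-dir flag is an assumed extension to purchase-bot.py, not something the script above implements:

```python
def build_launch_commands(accounts: list[dict], product_url: str) -> list[list[str]]:
    """Build one purchase-bot command per account; each gets its own
    profile directory so cookies and logins never collide."""
    commands = []
    for account in accounts:
        commands.append([
            "python3", "purchase-bot.py", product_url,
            "--profile-dir", f"profiles/{account['name']}",  # hypothetical flag
        ])
    return commands

accounts = [{"name": "main"}, {"name": "backup"}]
cmds = build_launch_commands(accounts, "https://www.popmart.com/us/products/1234/x")
print(len(cmds))  # 2
```

The resulting commands could then be started in parallel with subprocess.Popen so every account races for stock at the same moment.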
Predictive Purchase Decisions
Utilizing machine learning models to conduct in-depth analysis of historical sales data and current market trends can predict the optimal purchase timing. Such advanced systems can even trigger purchase commands before product inventory is visibly displayed, seizing fleeting opportunities.
Precise Inventory Forecasting
By analyzing product replenishment cycles and supply chain information, it's possible to predict when sold-out items might be restocked. Advanced bots equipped with this feature can precisely locate and capitalize on these replenishment opportunities, which are often imperceptible to manual operators.
Community Intelligence Integration
Continuously monitoring collector forums, social media platforms, and trading communities can help you acquire exclusive insider information about upcoming releases and market dynamics, providing valuable data support for your sniping strategy.
Best Practices for Testing and Deployment
To ensure the reliability and success rate of your blind box sniping bot during actual releases, a systematic testing and deployment strategy is indispensable.
Sandbox Environment Testing
Testing in a low-risk sandbox environment is the first step to verify the stability of your bot's core functions. You can choose some non-popular items or off-peak times to simulate the complete purchase process. This helps to identify and fix potential logical errors before actual sniping, avoiding costly mistakes during high-value item purchases.
Performance Bottleneck Analysis and Optimization
In high-concurrency sniping scenarios, system response speed and resource utilization are key to success. You need to deeply analyze the bot's performance bottlenecks and optimize them specifically, ensuring that every step runs with maximum efficiency.
Real-time Monitoring and Alert System
Deploying a comprehensive monitoring and alert system allows you to grasp the bot's operational status in real-time. Once any anomaly or failure occurs, the system should immediately send you an alert, so you can quickly respond and take measures to ensure the smooth progress of the sniping task.
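A minimal sketch of such an alert hook: a decorator that fires a notification whenever a bot task raises. The notify function here is a placeholder that just logs; in practice you would swap in a webhook, email, or push service:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)

def notify(message: str) -> None:
    """Placeholder alert channel; replace with a webhook, email, or push call."""
    logging.warning("ALERT: %s", message)

def alert_on_failure(task):
    """Decorator that fires an alert whenever the wrapped bot task raises."""
    @functools.wraps(task)
    def wrapper(*args, **kwargs):
        try:
            return task(*args, **kwargs)
        except Exception as exc:
            notify(f"{task.__name__} failed: {exc}")
            raise  # re-raise so callers still see the failure
    return wrapper

@alert_on_failure
def run_purchase(url: str) -> str:
    # Stand-in for the real purchase routine
    if not url.startswith("https://"):
        raise ValueError("bad URL")
    return "ok"

print(run_purchase("https://www.popmart.com/us/products/1234/x"))  # ok
```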
Disaster Recovery and Backup Solutions
To prevent single points of failure from causing the entire sniping task to fail, you should prepare multiple disaster recovery and backup solutions for core systems. By deploying multiple bot instances in different environments, you can effectively improve the system's robustness and reliability.
Conclusion
By building an automated sniping bot integrated with the Nstproxy proxy service, you will be able to turn passive into active, transforming frustrating manual purchasing experiences into strategic and intelligent contests. A powerful system that combines intelligent monitoring, advanced anti-detection techniques, and a stable proxy network will be your sharpest tool to stand out in the competitive blind box market and acquire your beloved collectibles.
The road to success is paved with extreme attention to technical details, rigorous testing and validation, and continuous insight into and adaptation to anti-bot strategies of platforms like Pop Mart. Undoubtedly, the investment in automated sniping will bring you rich rewards—those once unattainable rare collectibles are now within reach.
Now, let's put theory into action together, build your own automated sniping solution, and turn repeated purchasing failures into successes and joys. In this vibrant and passionate world of blind box collecting, a well-crafted bot will help you always stay ahead, ultimately embracing those works that resonate deeply with your heart.