
This page helps you quickly determine whether an API relay station is stably forwarding real model APIs, or whether it suffers from issues such as repackaging, model disguising, protocol inconsistencies, or capability dilution. We cross-validate through multiple probe requests, examining response structure, knowledge performance, identity consistency, chain-of-thought traces, and signature fingerprints. This is not a legal audit report, and it does not promise 100% accuracy. It does, however, provide a low-cost technical check that reduces risk before you purchase, integrate, or commit long-term to a relay station.
Detection results may show several types of verdicts. Please read each one carefully:

[Channel: aws-bedrock / vertex] When signature analysis detects an aws-bedrock or vertex channel, the API is not served through Claude's official channel but reverse-proxied from another platform. • aws-bedrock — likely from AWS Bedrock's Claude service • vertex — could be Google Vertex AI's official API, or a reverse-proxy service such as Antigravity. Performance is essentially identical to Claude Console; decide based on the overall score.

[Verdict: Possibly reverse-proxied] When this verdict appears together with signature issues, it indicates a lower-tier reverse-proxy channel, possibly from platforms like Kiro, Warp, or Windsurf. These channels have poor stability and security — use with caution.

[Verdict: Not Claude] ⚠️ IMPORTANT! When LLM fingerprint verification fails, the API is absolutely NOT returning a Claude model. This typically means someone is passing off GLM, Codex, or another model as Claude. This is the most serious form of fraud. DO NOT USE IT — stop immediately and report it to the relay station.
The founder of CCTest has years of deep experience in Claude reverse-engineering channels, having operated infrastructure that once carried nearly one-third of the internet's Claude reverse-proxy traffic. This gives us an exceptionally deep understanding of Claude API protocol details, signature mechanisms, and channel fingerprints. It was precisely because we witnessed so many cases of impersonation, model substitution, and signature forgery that dozens of relay station operators petitioned for a reliable, continuous third-party detection platform. CCTest was born from this need — not a theoretical security tool, but professional detection capability forged in real-world combat. We understand counterfeiting techniques better than anyone, which is exactly why we can identify them more precisely.
We do not make commercial recommendations for any relay station. This platform only provides technical detection capabilities to help users judge the authenticity and reliability of relay stations. When choosing a relay station, prioritize providers with good community reputation, long operating history, and transparent pricing.
An API relay station is a proxy service that receives your API requests, forwards them to official model APIs (such as Anthropic, OpenAI), and returns the results to you. Relay stations typically offer cheaper prices and access without VPN, but may also have risks of counterfeiting, downgrading, or data security issues.
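The forwarding step can be sketched in a few lines. This is a simplified illustration, not any real relay's code: the endpoint URL is the official Anthropic Messages API, while `build_upstream_request` and the key names are hypothetical. It shows why trust matters — the relay holds your traffic at exactly the point where it could also rewrite or log it.

```python
# Minimal sketch of the request rewriting a relay station performs.
# The function name and structure are illustrative, not a real relay's code.

UPSTREAM_URL = "https://api.anthropic.com/v1/messages"  # official endpoint

def build_upstream_request(client_headers: dict, body: bytes, upstream_key: str) -> dict:
    """Rewrite a client request so it can be forwarded to the official API.

    The relay swaps the client's API key for its own upstream key and
    forwards the body; this is also exactly the point where a dishonest
    relay could substitute the model or log the conversation.
    """
    headers = {
        "x-api-key": upstream_key,  # relay's own key replaces yours
        "anthropic-version": client_headers.get("anthropic-version", "2023-06-01"),
        "content-type": "application/json",
    }
    return {"url": UPSTREAM_URL, "headers": headers, "body": body}
```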
Common risks include:
• Model counterfeiting: claiming to serve Claude Opus while actually routing to a cheaper model
• Signature forgery: tampering with or forging the signature information in model responses
• Data leakage: the relay station may log your API key and conversation content
• Service instability: availability depends entirely on the third-party relay and cannot be guaranteed
• Protocol inconsistency: the response format differs from the official API
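The last risk above, protocol inconsistency, is the cheapest to screen for yourself. A minimal sketch, assuming the top-level fields an official Messages API response contains (`id`, `type`, `role`, `content`, `model`, `stop_reason`, `usage`); the function name is hypothetical:

```python
# Top-level fields present in an official Messages API JSON response.
REQUIRED_FIELDS = {"id", "type", "role", "content", "model", "stop_reason", "usage"}

def find_protocol_gaps(response: dict) -> list[str]:
    """Return field names an official response would contain but this one
    lacks -- a cheap first signal that a relay rewrote or reconstructed
    the response instead of passing it through."""
    gaps = sorted(REQUIRED_FIELDS - response.keys())
    usage = response.get("usage", {})
    for key in ("input_tokens", "output_tokens"):
        if key not in usage:
            gaps.append(f"usage.{key}")
    return gaps
```

An empty result does not prove authenticity (a careful forger preserves the schema), but a non-empty one is an immediate red flag.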
We use a black-box detection approach with 6 core checks:
1. LLM Fingerprint — verify that the model is actually Claude
2. Stream Structure — validate SSE event sequences against the official specification
3. Non-stream Structure — verify the JSON response structure
4. Signature Analysis — parse Protobuf signatures and identify the channel source
5. Signature Verification — confirm via the official API that signatures have not been tampered with
6. Multimodal — test image and document recognition capabilities
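As an illustration of check 2, the stream of an official Messages API response emits a fixed sequence of SSE event types (`message_start`, one or more `content_block_start`/`content_block_delta`/`content_block_stop` groups, then `message_delta` and `message_stop`, with `ping` events allowed anywhere). The sketch below validates that order with a regular expression over event names; it is a simplified stand-in for the real check, not CCTest's implementation:

```python
import re

# Grammar of official Messages API stream event names; ping events may
# appear anywhere and are stripped before matching. Simplified sketch.
STREAM_GRAMMAR = re.compile(
    r"^message_start"
    r"(?: content_block_start(?: content_block_delta)* content_block_stop)+"
    r" message_delta message_stop$"
)

def stream_structure_ok(events: list[str]) -> bool:
    """Check that observed SSE event names follow the official sequence."""
    meaningful = [e for e in events if e != "ping"]
    return bool(STREAM_GRAMMAR.match(" ".join(meaningful)))
```

Relays that buffer and re-synthesize streams often emit missing or reordered events, which this kind of check catches immediately.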
Yes. The detection process sends approximately 4-6 requests to the API endpoint you provide (stream, non-stream, multimodal requests, etc.), each consuming a small number of tokens. Total consumption is roughly 200-500 input tokens and 500-1500 output tokens, typically costing less than $0.05.
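The upper bound of that estimate is simple arithmetic. A minimal sketch, using illustrative Sonnet-tier prices of $3 per million input tokens and $15 per million output tokens as an assumption (your relay's actual rates will differ):

```python
def probe_cost(input_tokens: int, output_tokens: int,
               in_price_per_m: float, out_price_per_m: float) -> float:
    """USD cost of a detection run at a given per-million-token price."""
    return (input_tokens * in_price_per_m + output_tokens * out_price_per_m) / 1_000_000

# Worst case from the figures above at the assumed $3/$15 per-million rates:
worst_case = probe_cost(500, 1500, 3.0, 15.0)  # about $0.024
```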