Welcome to the first post in my ultimate guide to scraping your own auto accept list. For this particular tutorial, I will be focusing on building an auto accept list designed around GSA Search Engine Ranker, but in theory you can duplicate this process to build auto accept lists for tools such as Scrapebox too.
This whole process can be completed using nothing but GSA Search Engine Ranker if required, but throughout the various posts in this guide, I will cover various tips and tricks using other tools that can speed up and further optimize the process. Even with these tips and tricks, you will be required to sacrifice your time to save money; if you would rather save your time, then it may be worth looking into investing in a premium auto accept list.
What Is An Auto Accept List?
An auto accept list is a list of target domains that you have either gathered yourself or paid someone for access to their list. The domains have human moderation turned off, meaning you can automatically submit content to them and gain a backlink to your site without anyone checking that the content you are submitting is high quality. They can be extremely effective for tiered link building; depending on the content management system running on the target domain, these sites can also be used on your tier one all the way through to your bottom tier.
Although human moderation for these domains has been turned off, it does not necessarily mean they are low quality. Some of the domains are protected by services such as ReCaptcha and Solve, meaning people who only use captcha tools such as GSA Captcha Breaker or an OCR captcha service will struggle to post to them. This can help keep their metrics higher, and you can take advantage of the process I shared on how to specifically target these higher quality domains if you wish.
The Advantages Of Having An Auto Accept List!
There are a number of advantages to having your own auto accept list that you can turn to when required. The majority of my experience with auto accept lists has been around building out two and three tier link pyramids created from nothing but my auto accept domains and using them to funnel link juice to my money sites. This is a relatively easy process to set up in GSA Search Engine Ranker, and once you have completed the initial setup of your projects, you can leave it to build out your tiered link pyramid indefinitely.
I also have a fair amount of experience with creating web 2.0 pages on my tier one. I usually use an automated tool to create them for me, but I have also outsourced their creation to this service as it offers a stronger list of domains that tend to have a better stick rate. I then turn to my auto accept list and, similar to the above example, build out a tier two and tier three below the web 2.0 pages to power them up and push link juice to my sites. Some guest post service providers will also allow you to do this to your posts on their sites, but be sure to ask permission as they may remove your post if they detect automated links being built to it.
Another use for an auto accept list is to diversify the backlink profile of your sites or dilute your anchor text profile to mask your power links. For example, say you have link juice coming from a private blog network, guest posts, or a tiered up web 2.0 pyramid, and you are either putting a lot of effort into powering them up or have paid money for the links. There is a high chance you will want an exact match, secondary keyword, or LSI keyword anchor text link from these power pages. You can then turn to your auto accept list to build out additional tier one links to help hide your power links or dilute your anchor text profile to avoid over optimization penalties.
The Processes Involved In Building An Auto Accept List!
Creating your own verified list can be broken down into three main phases that complete the overall process: acquisition, identification, and verification. As I have previously touched on, GSA Search Engine Ranker has the ability to complete all of these phases without any additional help. Unfortunately, doing so has the drawback of using SER's resources inefficiently, as there are better tools out there for the earlier phases that can leave SER free to focus on the verification of the targets.
The first phase is all about acquiring the initial targets to begin the process. There are two main ways to gather these targets: footprint scraping and link extraction. Both methods have their advantages and disadvantages, which I will cover a little later in this article, but if possible you should be taking advantage of both!
Footprint scraping involves scraping search engines for domains running specific content management systems by running search queries containing text common to the CMS, such as “Powered By WordPress”. Link extraction involves scraping a page for its external links and saving them for later processing.
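To make the footprint scraping idea concrete, here is a minimal sketch of merging CMS footprints with niche keywords to generate search queries. The footprints, keywords, and query format below are illustrative assumptions on my part rather than a recommended set; tools like Scrapebox do this merging for you at scale.

```python
# Merge CMS footprints with niche keywords to build search queries
# for footprint scraping. These example footprints and keywords are
# placeholders, not a curated list.
footprints = [
    '"Powered By WordPress"',
    '"Powered by Drupal"',
]
keywords = ["gardening", "fitness"]

# Every footprint is paired with every keyword to find niche relevant pages.
queries = [f"{footprint} {keyword}" for footprint in footprints for keyword in keywords]

for query in queries:
    print(query)
```

Each query would then be fed to a scraper, with the returned URLs saved for the identification phase.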
Target pooling groups used to be popular a few years back, but the practice seems to have since died out. In my opinion, this happened due to people optimizing their own link acquisition and the increase in premium list services. The process was still based around link extraction and footprint scraping, but the tasks would be split between a small group of people with all of the acquired targets shared between the group.
For example, one person would focus their efforts on nothing but footprint scraping the stronger engines supported within SER such as BuddyPress, Drupal, and WordPress. A second member of the group may focus their resources on footprint scraping the other contextual engines after filtering their footprints to optimize their time, as I explain in this post. A third member may use their resources to scrape non-contextual engines such as blog and image comments, with a fourth person using their resources to pick up as much of everything as possible using link extraction.
The second phase of the process involves taking the acquired targets from phase one and sorting them into one of three main groups: desired usable targets, undesired usable targets, and unusable targets. The first two groups are controlled by the user when deciding what content management systems or link types they want to acquire, but the third group is decided by the tool and external factors beyond your control.
For example, say we are using both footprint scraping and link extraction to build a list from scratch for GSA Search Engine Ranker. Our desired usable target platforms may be articles, social networks, wikis, blog comments, image comments, and guestbooks. Our undesired usable targets may be platforms such as exploit, RSS, indexer, pingback, and referrers. Our unusable targets are content management systems or domains with custom designs that are not supported by SER, which we have no control over.
The third and final phase of the process involves taking your desired usable targets from the identification phase and pushing them through SER to see whether you are able to post to them or not. Your total verified link yield can change massively depending on your captcha settings. For example, a user only using GSA Captcha Breaker will receive the lowest number of verified targets, a user who supplements this with an OCR captcha service will receive additional verified targets, and a user who supplements even further with a human solved captcha service will receive the most verified targets.
On the flip side, with increased verified targets comes increased cost. For the practical side of this tutorial, I will only be using GSA Captcha Breaker for the process as it is the most realistic option for people new to the GSA toolset and it is the cheapest overall.
Footprint Scraping Vs Link Extraction!
As I touched on earlier, both methods of link acquisition have their advantages and disadvantages. It is important you are aware of these when choosing your main method for link acquisition as one may offer more advantages for your specific goal.
Footprint Scraping Pros
- Laser target the exact content management systems you want to scrape.
- Target specific link types such as prioritizing contextual platforms.
- Keywords can be merged with the footprints to find niche relevant pages.
- Relatively low hardware requirement even on higher thread counts.
- Acquired links should have a better identification rate.
- Develop custom footprints over time to further optimize the process.
Footprint Scraping Cons
- Search engines will soft ban your premium proxies unless you pay attention to your timeouts.
- Google is becoming stricter with modified search operators such as inurl:, lowering link yield.
- Public proxies burn out very fast.
- Footprints can be a waste of time unless you purge your useless footprints.
- Not an exponential process.
Link Extraction Pros
- The larger your list the more targets you can extract from resulting in a higher link yield.
- Does not require proxies.
- The domains available for extraction update by the minute, so you can constantly extract and grow.
- Large chunks of premium lists can be acquired totally free.
- Can grow a list extremely quickly.
Link Extraction Cons
- Can be a massive resource hog.
- Picks up a large number of domains that can’t be identified.
- No ability to laser target specific content management systems or link types.
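To illustrate what link extraction is doing under the hood, here is a minimal sketch using Python's standard library that pulls the external links out of a page's HTML. The sample HTML and domain names are hypothetical; a real run would download pages with your scraping tool and feed the markup through in the same way.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ExternalLinkExtractor(HTMLParser):
    """Collects href values on <a> tags that point away from the source domain."""
    def __init__(self, source_domain):
        super().__init__()
        self.source_domain = source_domain
        self.external_links = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        domain = urlparse(href).netloc
        # Keep only links whose domain differs from the page we scraped.
        if domain and domain != self.source_domain:
            self.external_links.add(href)

# Hypothetical page content from a domain we already hold in our list.
html = ('<a href="https://example.com/page">internal</a>'
        '<a href="https://other-site.net/post">external</a>')
extractor = ExternalLinkExtractor("example.com")
extractor.feed(html)
print(sorted(extractor.external_links))
```

Each extracted link becomes a new target for the identification phase, and any new domain that verifies can itself be extracted from later, which is why link extraction can grow a list so quickly.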
That concludes my introduction to building your own auto accept list for internet marketing tools. I hope this has helped some readers understand the processes involved and given them an insight into which type of link acquisition is better for them.