advertools
Online marketing productivity and analysis tools
Crawl websites, generate keywords for SEM campaigns, create text ads on a large scale, analyze multiple SERPs at once, gain insights from large volumes of social media posts, and become a more productive online marketer.
If these are things you are interested in, then this package might make your life a little easier.
To install advertools, run the following from the command line:
pip install advertools
# OR:
pip3 install advertools
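Once installed, the package is conventionally imported with the alias adv. A quick way to confirm the installation works (a minimal sketch, assuming a standard Python setup):

    import advertools as adv

    # Print the installed version to verify the package is importable.
    print(adv.__version__)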
SEM
- Generate Keywords for SEM Campaigns
- Create Ads on a Large Scale
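Keyword generation, for example, takes a list of products and a list of descriptive words and returns all their combinations across match types, ready for bulk upload. A minimal sketch using adv.kw_generate (the product and word lists are made up for illustration):

    import advertools as adv

    products = ['used cars', 'new cars']    # hypothetical products
    words = ['buy', 'best price', 'cheap']  # hypothetical modifier words

    # Returns a DataFrame of keyword permutations, one row per
    # keyword/match-type combination, organized by campaign and ad group.
    kw_df = adv.kw_generate(products, words)
    print(kw_df.head())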
SEO
- robots.txt
- XML Sitemaps
- SEO Spider / Crawler
- Crawl Strategies (see the combined sketch after this list)
  - How to crawl a list of pages, and those pages only (list mode)?
  - How can I crawl a website including its sub-domains?
  - How can I save a copy of my crawl logs for auditing later?
  - How can I automatically stop my crawl based on a certain condition?
  - How can I (dis)obey robots.txt rules?
  - How do I set my User-agent while crawling?
  - How can I control the number of concurrent requests while crawling?
  - How can I slow down the crawling so I don’t hit the websites’ servers too hard?
  - How can I set multiple settings for the same crawl job?
  - How do I crawl a list of pages and follow their links, but only to a specified depth?
  - How do I pause/resume crawling, while making sure I don’t crawl the same page twice?
- Analyze Search Engine Results (SERPs)
- Google's Knowledge Graph
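Most of the crawl-strategy questions above map to Scrapy settings passed through the custom_settings parameter of adv.crawl. A combined sketch (example.com, the bot name, and the setting values are placeholders to adapt to your own crawl):

    import advertools as adv
    import pandas as pd

    # Check robots.txt rules and download an XML sitemap first.
    robots_df = adv.robotstxt_to_df('https://example.com/robots.txt')
    sitemap_df = adv.sitemap_to_df('https://example.com/sitemap.xml')

    # Crawl the site; output goes to a jsonlines (.jl) file.
    adv.crawl(
        'https://example.com',
        'example_crawl.jl',
        follow_links=True,
        custom_settings={
            'USER_AGENT': 'my-audit-bot',         # set your User-agent
            'ROBOTSTXT_OBEY': False,              # (dis)obey robots.txt
            'CONCURRENT_REQUESTS_PER_DOMAIN': 2,  # limit concurrent requests
            'DOWNLOAD_DELAY': 1,                  # seconds between requests
            'LOG_FILE': 'example_crawl.log',      # keep logs for later audits
            'DEPTH_LIMIT': 2,                     # only follow links this deep
        },
    )

    # Read the crawl output into a DataFrame for analysis.
    crawl_df = pd.read_json('example_crawl.jl', lines=True)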
Text & Content Analysis
Social Media
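The Twitter and YouTube modules return API responses as DataFrames for analysis. A sketch of the Twitter workflow, assuming you have your own API credentials (all values below are placeholders):

    import advertools as adv

    # Authenticate once per session with your own credentials.
    adv.twitter.set_auth_params(
        app_key='YOUR_APP_KEY',
        app_secret='YOUR_APP_SECRET',
        oauth_token='YOUR_OAUTH_TOKEN',
        oauth_token_secret='YOUR_OAUTH_TOKEN_SECRET',
    )

    # Search results arrive as a DataFrame of tweet and user attributes.
    tweets = adv.twitter.search(q='#SEO', count=100, tweet_mode='extended')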
Indices and tables
Index & Change Log
- Index & Change Log
- advertools package
  - Subpackages
    - advertools.code_recipes package
      - Submodules
        - 🕷 SEO Crawling & Scraping: Strategies & Recipes
      - Module contents
  - Submodules
    - Create Ads on a Large Scale
    - Create Ads Using Long Descriptive Text (top-down approach)
    - Emoji: Extract, Analyze, and Get Insights
    - Extract structured entities from text lists
    - Import and Analyze Knowledge Graph Results on a Large Scale
    - Generate Keywords for SEM Campaigns
    - Regular Expressions for Extracting Structured Entities
    - 🤖 robots.txt Tester for Large Scale Testing
    - Import Search Engine Results Pages (SERPs) for Google and YouTube
    - Download, Parse, and Analyze XML Sitemaps
    - 🕷 Python SEO Crawler / Spider
    - Stopwords in Several Languages
    - Twitter Data API
    - URL Builders
    - Split, Parse, and Analyze URLs
    - Text Analysis
    - Tokenize Words (N-grams)
    - YouTube Data API
  - Module contents
- Change Log - advertools
  - Unreleased
  - 0.10.7 (2020-09-18)
  - 0.10.6 (2020-06-30)
  - 0.10.5 (2020-06-14)
  - 0.10.4 (2020-06-07)
  - 0.10.3 (2020-06-03)
  - 0.10.2 (2020-05-25)
  - 0.10.1 (2020-05-23)
  - 0.10.0 (2020-05-21)
  - 0.9.1 (2020-05-19)
  - 0.9.0 (2020-04-03)
  - 0.8.1 (2020-02-08)
  - 0.8.0 (2020-02-02)
  - 0.7.3 (2019-04-17)
  - 0.7.2 (2019-03-29)
  - 0.7.1 (2019-03-26)
  - 0.7.0 (2019-03-26)
  - 0.6.0 (2019-02-11)
  - 0.5.3 (2019-01-31)
  - 0.5.2 (2018-12-01)
  - 0.5.1 (2018-11-06)
  - 0.5.0 (2018-11-04)
  - 0.4.1 (2018-10-13)
  - 0.4.0 (2018-10-08)
  - 0.3.0 (2018-08-14)
  - 0.2.0 (2018-07-06)
  - 0.1.0 (2018-07-02)