Change Log - advertools
Initial experimental functionality for
Enable autothrottling by default for
Fixed: Make img attributes consistent in length, and support all attributes.
Allow optional trailing space in log files (contributed by @andypayne)
Replace newlines with spaces while parsing JSON-LD which was causing errors in some cases.
Crawling recipe for how to use the DEFAULT_REQUEST_HEADERS setting to change the default headers.
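As an illustration, a minimal sketch of passing DEFAULT_REQUEST_HEADERS through the custom_settings parameter (the URL, output file, and header values are placeholders)::

    import advertools as adv

    # Override the default request headers for this crawl job only.
    adv.crawl(
        'https://example.com',
        'example_output.jl',
        follow_links=True,
        custom_settings={
            'DEFAULT_REQUEST_HEADERS': {'Accept-Language': 'es'},
        },
    )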
Split long lists of URLs while crawling regardless of the
Clarify that while authenticating for Twitter, only app_key and app_secret are required, with the option to provide oauth_token and oauth_token_secret.
Command line interface with most functions
Make documentation interactive for most pages using
Use np.nan wherever there are missing values in
Don't remove double quotes from etags when downloading XML sitemaps
Replace instances of the deprecated pd.DataFrame.append with pd.concat.
Replace empty values with np.nan for the size column in
crawl_headers: A crawler that only makes HEAD requests to a known list of URLs.
reverse_dns_lookup: A way to get host information for a large list of IP addresses concurrently.
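A quick sketch of both additions (the URLs, output file, and IP addresses are placeholders)::

    import advertools as adv

    # HEAD requests only: get status codes and response headers
    # without downloading the page bodies.
    adv.crawl_headers(
        ['https://example.com', 'https://example.com/about'],
        'headers_output.jl',
    )

    # Concurrent reverse DNS lookup; returns a pandas DataFrame
    # with host information for each IP address.
    hosts_df = adv.reverse_dns_lookup(['66.249.66.1', '66.249.66.2'])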
New options for crawling: exclude_url_params, include_url_params, exclude_url_regex, and include_url_regex for controlling which links to follow while crawling.
custom_settings options given to the crawl function that were defined using a dictionary can now be set without issues. There was an issue if those options were not strings.
The skip_url_params option was removed and replaced with the more versatile exclude_url_params, which accepts either True or a list of URL parameters to exclude while following links.
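A sketch of the new link-filtering options (the URL, output file, and filter values are illustrative)::

    import advertools as adv

    # Follow links, but skip tracking parameters and only crawl
    # URLs under the /blog/ section.
    adv.crawl(
        'https://example.com',
        'filtered_crawl.jl',
        follow_links=True,
        exclude_url_params=['utm_source', 'utm_medium'],
        include_url_regex='/blog/',
    )

    # Alternatively, exclude_url_params=True skips any link
    # containing URL parameters.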
Crawler stops when provided with bad URLs in list mode.
logs_to_df: Convert a log file of any non-JSON format into a pandas DataFrame and save it to a parquet file. This also compresses the file to a much smaller size.
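For illustration, a minimal sketch assuming a log file in the 'combined' format (the file names are placeholders)::

    import advertools as adv
    import pandas as pd

    # Parse the raw access log into a compressed parquet file;
    # unparseable lines are written to the errors file.
    adv.logs_to_df(
        log_file='access.log',
        output_file='access_logs.parquet',
        errors_file='log_errors.txt',
        log_format='combined',
    )

    logs_df = pd.read_parquet('access_logs.parquet')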
Crawler extracts all available img attributes: 'alt', 'crossorigin', 'height', 'ismap', 'loading', 'longdesc', 'referrerpolicy', 'sizes', 'src', 'srcset', 'usemap', and 'width' (excluding global HTML attributes).
New parameter for the crawl function, skip_url_params: defaults to False, consistent with previous behavior, with the ability to not follow/crawl links containing any URL parameters.
New column for url_to_df, "last_dir": extracts the value of the last directory for each of the URLs.
Query parameter columns in the url_to_df DataFrame are now sorted by how full the columns are (the percentage of values that are not NA).
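A short sketch showing the new column (the URLs are placeholders)::

    import advertools as adv

    urls_df = adv.url_to_df([
        'https://example.com/shop/shoes/running?color=blue',
        'https://example.com/blog/seo-tips',
    ])

    # 'last_dir' holds the last directory of each path:
    # 'running' and 'seo-tips' in this case.
    print(urls_df['last_dir'])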
The nofollow attribute for nav, header, and footer links.
Timeout error while downloading robots.txt files.
Make extracting nav, header, and footer links consistent with all links.
New parameter recursive for sitemap_to_df to control whether or not to get all sub-sitemaps (the default), or to only get the current (sitemapindex) one.
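As an illustration (the sitemap URL is a placeholder)::

    import advertools as adv

    # Get only the sitemap index itself, without fetching
    # the sub-sitemaps it links to.
    index_df = adv.sitemap_to_df(
        'https://example.com/sitemap_index.xml',
        recursive=False,
    )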
New columns for sitemap_to_df: sitemap_size_mb (1 MB = 1,024 x 1,024 bytes), and download_date.
Option to request multiple robots.txt files with robotstxt_to_df.
Option to save the downloaded robots DataFrame(s) to a file with robotstxt_to_df, using the new parameter output_file.
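A brief sketch of both options (the URLs and output file are placeholders)::

    import advertools as adv

    # Download several robots.txt files in one call, appending
    # them to a file as they are fetched.
    adv.robotstxt_to_df(
        ['https://example.com/robots.txt',
         'https://www.google.com/robots.txt'],
        output_file='robots_files.jl',
    )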
Two new columns for robotstxt_to_df.
Raise ValueError in crawl if css_selectors or xpath_selectors contain any of the default crawl column headers.
New XPath code recipes for custom extraction.
crawllogs_to_df: Converts crawl logs to a DataFrame, provided they were saved while using the crawl function.
New columns in crawl: viewport, charset, all h headings (whichever is available), nav, header and footer links and text, if available.
Crawl errors don't stop crawling anymore, and the error message is included in the output file under a new errors and/or jsonld_errors column(s).
In case of having JSON-LD errors, errors are reported in their respective column, and the remainder of the page is scraped.
Removed column prefix resp_meta_ from columns containing it
Redirect URLs and reasons are separated by '@@' for consistency with other multiple-value columns
Links extracted while crawling are not unique any more (all links are extracted).
Emoji data updated to v13.1.
Heading tags are scraped even if they are empty, e.g. <h2></h2>.
Default user agent for crawling is now advertools/VERSION.
Handle sitemap index files that contain links to themselves, with an error message included in the final DataFrame
Error in robots.txt files caused by comments preceded by whitespace
Zipped robots.txt files causing a parsing issue
Crawl issues on some Linux systems when providing a long list of URLs
Columns from the crawl output: url_redirected_to, links_fragment
knowledge_graph, for querying Google's API
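A minimal sketch, assuming a valid API key::

    import advertools as adv

    # Query the Knowledge Graph Search API and get a DataFrame.
    kg_df = adv.knowledge_graph(key='YOUR_GOOGLE_API_KEY', query='python')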
New parameter max_workers for sitemap_to_df to determine how fast it can go
New parameter capitalize_adgroups for kw_generate to determine whether to keep ad groups as they are, or set them to title case (the default)
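A short sketch of the new parameter (the products and words are placeholders)::

    import advertools as adv

    # Keep ad group names exactly as provided, instead of
    # converting them to title case (the default).
    kw_df = adv.kw_generate(
        products=['running shoes'],
        words=['buy', 'price'],
        capitalize_adgroups=False,
    )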
Remove restrictions on the number of URLs provided to crawl, assuming follow_links is set to False (list mode)
JSON-LD issue breaking crawls when it's invalid (now skipped)
youtube.guide_categories_list (no longer supported by the API)
JSON-LD support in crawling. If available on a page, JSON-LD items will have special columns, and multiple JSON-LD snippets will be numbered for easy filtering
Stricter parsing for rel attributes, making sure they are in link elements as well
Date column names for sitemap_to_df unified as "download_date"
Numbering OG, Twitter, and JSON-LD elements, where multiple are present on the same page, now follows a unified approach: no numbering for the first element, and numbers starting with "1" from the second element on ("element", "element_1", "element_2", etc.).
- New features for the crawl function:
Extract canonical tags if available
Extract alternate href and hreflang tags if available
Open Graph data "og:title", "og:type", "og:image", etc.
Twitter cards data "twitter:site", "twitter:title", etc.
- Minor fixes to robotstxt_to_df:
Allow whitespace in fields
Allow case-insensitive fields
crawl now only supports output_file with the extension ".jl"
word_frequency drops wtd_freq and rel_value columns if num_list is not provided
url_to_df, splitting URLs into their components and to a DataFrame
Slight speed up for robotstxt_test, testing URLs and whether they can be fetched by certain user-agents
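A minimal sketch (the robots.txt URL, user agents, and URLs are placeholders)::

    import advertools as adv

    # Check which URLs each user agent can fetch according to
    # the rules in the robots.txt file.
    test_df = adv.robotstxt_test(
        'https://example.com/robots.txt',
        user_agents=['Googlebot', 'Bingbot'],
        urls=['/', '/search', '/products'],
    )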
Documentation main page relayout, grouping of topics, & sidebar captions
Various documentation clarifications and new tests
User-Agent info to requests getting sitemaps and robotstxt files
CSS/XPath selectors support for the crawl function
Support for custom spider settings with a new parameter, custom_settings
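As an illustration, a sketch combining both additions (the selectors and settings values are illustrative)::

    import advertools as adv

    adv.crawl(
        'https://example.com',
        'selector_crawl.jl',
        follow_links=True,
        # Extract extra elements; dictionary keys become
        # column names in the output file.
        css_selectors={'product_price': '.price::text'},
        xpath_selectors={'breadcrumbs': '//nav[@class="crumbs"]//a/text()'},
        # Pass custom Scrapy settings for this crawl job.
        custom_settings={'CLOSESPIDER_PAGECOUNT': 100},
    )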
Updated supported search operators and values for CSE, which had changed
Links are better handled, and new output columns are available:
body_text extraction is improved by including <p>, <li>, and <span> elements
crawl, for crawling and parsing websites
robotstxt_to_df, downloading robots.txt files into DataFrames
Ability to specify a robots.txt file for robotstxt_test
Ability to retrieve any kind of sitemap (news, video, or images)
Errors column to the returned DataFrame if any errors occur
sitemap_downloaded column showing the datetime of getting the sitemap
Logging issue causing sitemap_to_df to log the same action twice
Issue preventing URLs not ending with xml or gz from being retrieved
Correct sitemap URL showing in the sitemap column
sitemap_to_df imports an XML sitemap into a DataFrame
Column query_time is now named queryTime in the youtube functions
Handle json_normalize import from pandas based on pandas version
New module youtube, connecting to all GET requests in the API
extract_numbers new function
emoji_search new function
emoji_df new variable containing all emoji as a DataFrame
Emoji database updated to v13.0
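A quick sketch of the emoji helpers (the example texts are placeholders)::

    import advertools as adv

    # Search the emoji database by name.
    hearts = adv.emoji_search('heart')

    # The full emoji database as a DataFrame.
    all_emoji = adv.emoji_df

    # Extract emoji from a list of texts, with summary statistics.
    emoji_summary = adv.extract_emoji(['I love this! 😍', 'ok 👍👍'])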
serp_goog with expanded pagemap and metadata
serp_goog errors, some parameters not appearing in result df
extract_numbers issue when providing dash as a separator in the middle
New function extract_exclamations very similar to extract_questions
New function extract_urls, also counts top domains and top TLDs
New keys to extract_emoji; top_emoji_categories & top_emoji_sub_categories
Groups and sub-groups to emoji db
Emoji regex updated
Simpler extraction of Spanish questions
Missing __init__ imports.
New extract_ functions (see the sketch below):
A generic extract function used by all the others, which takes an arbitrary regex to extract text.
extract_questions to get question mark statistics, as well as the text of questions asked.
extract_currency shows text that has currency symbols in it, as well as surrounding text.
extract_intense_words gets statistics about, and extracts, words with any character repeated three or more times, indicating an intense feeling (positive or negative).
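A brief sketch of a few of these (the example texts are placeholders)::

    import advertools as adv

    posts = ['How much does it cost?', 'I loooove it!!', '$50 well spent']

    # Each function returns a dictionary with the extracted text
    # and summary statistics (counts, frequencies, top items).
    questions = adv.extract_questions(posts)
    currency = adv.extract_currency(posts)
    intense = adv.extract_intense_words(posts)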
New function word_tokenize (see the sketch below):
Used by word_frequency to get tokens of 1-, 2-, and 3-word phrases (or more).
Splits a list of text into tokens of a specified number of words each.
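A minimal sketch::

    import advertools as adv

    # Split each text into overlapping two-word tokens.
    adv.word_tokenize(['split this text please'], phrase_len=2)
    # [['split this', 'this text', 'text please']]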
New stop-words from the spaCy package:
current: Arabic, Azerbaijani, Danish, Dutch, English, Finnish, French, German, Greek, Hungarian, Italian, Kazakh, Nepali, Norwegian, Portuguese, Romanian, Russian, Spanish, Swedish, Turkish.
new: Bengali, Catalan, Chinese, Croatian, Hebrew, Hindi, Indonesian, Irish, Japanese, Persian, Polish, Sinhala, Tagalog, Tamil, Tatar, Telugu, Thai, Ukrainian, Urdu, Vietnamese
- word_frequency takes new parameters (see the sketch after this list):
regex defaults to words, but can be changed to anything; '\S+' for example splits words while keeping punctuation.
sep is no longer an option; the above regex can be used instead
num_list is now optional, and defaults to counts of 1 each if not provided. Useful for counting abs_freq only, if data are not available.
phrase_len: the number of words in each split token. Defaults to 1, and can be set to 2 or higher. This helps in analyzing phrases as opposed to single words.
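A short sketch of these parameters together (the documents are placeholders)::

    import advertools as adv

    docs = ['quick brown fox', 'quick red fox', 'slow brown bear']

    # Two-word phrases; without num_list, abs_freq simply counts
    # each occurrence once.
    freq_df = adv.word_frequency(docs, phrase_len=2)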
Parameters supplied to serp_goog appear at the beginning of the result df
serp_youtube now contains nextPageToken to make paginating requests easier
- New function extract_words, to extract an arbitrary set of words
- Minor updates
ad_from_string slots argument reflects new text ad lengths
hashtag regex improved
- Fix minor bugs
Handle Twitter search queries with 0 results in final request
- Fix minor bugs
Properly handle requests for >50 items (serp_youtube)
Rewrite test for _dict_product
Fix issue with string printing error msg
- Fix minor bugs
_dict_product implemented with lists
Missing keys in some YouTube responses
- New function serp_youtube (see the sketch below)
Query YouTube API for videos, channels, or playlists
Multiple queries (product of parameters) in one function call
Response looping and merging handled, one DataFrame
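A minimal sketch, assuming a valid API key::

    import advertools as adv

    # Multiple queries in one call: the product of the supplied
    # parameters is requested and merged into one DataFrame.
    yt_df = adv.serp_youtube(
        key='YOUR_YOUTUBE_API_KEY',
        q=['python tutorial', 'pandas tutorial'],
    )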
serp_goog returns Google's original error messages
twitter responses with entities get the entities extracted, each in a separate column
- New function serp_goog, based on Google CSE (see the sketch below)
Query Google search and get the result in a DataFrame
Make multiple queries / requests in one function call
All responses merged in one DataFrame
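A minimal sketch, assuming a valid API key and custom search engine ID::

    import advertools as adv

    # Multiple values per parameter make multiple requests,
    # with all responses merged into one DataFrame.
    serp_df = adv.serp_goog(
        q=['seo tools', 'sem tools'],
        cx='YOUR_CSE_ID',
        key='YOUR_GOOGLE_API_KEY',
    )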
twitter.get_place_trends results are ranked by town and country
- New Twitter module based on twython (see the sketch below)
Wraps 20+ functions for getting Twitter API data
Gets data in a pandas DataFrame
Handles looping over requests higher than the defaults
Tested on Python 3.7
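A brief sketch of the authentication flow and one wrapped function (all credentials are placeholders)::

    import advertools as adv

    # Authenticate once; the parameters are then used by all
    # functions in the twitter module.
    adv.twitter.set_auth_params(
        app_key='YOUR_APP_KEY',
        app_secret='YOUR_APP_SECRET',
        oauth_token='YOUR_OAUTH_TOKEN',
        oauth_token_secret='YOUR_OAUTH_TOKEN_SECRET',
    )

    # Search tweets; looping and merging into a DataFrame is handled.
    tweets_df = adv.twitter.search(q='#python', count=500)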
Search engine marketing cheat sheet.
- New set of extract_ functions, with summary stats for each
Tests and bug fixes
New set of kw_<match-type> functions.
Full testing and coverage.
First release on PyPI.
- Functions available:
ad_create: create a text ad by placing words in placeholders
ad_from_string: split a long string into shorter strings that fit into given ad slots
kw_generate: generate keywords from lists of products and words
url_utm_ga: generate a UTM-tagged URL for Google Analytics tracking
word_frequency: measure the absolute and weighted frequency of words in a collection of documents
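A quick sketch of two of these; the url_utm_ga keyword names below are assumptions based on the standard UTM parameters::

    import advertools as adv

    # Place each replacement in the template, using the fallback
    # whenever the result would exceed max_len characters.
    ads = adv.ad_create(
        template='Get the Best {} Today',
        replacements=['Shoes', 'Gloves'],
        fallback='Gear',
        max_len=30,
    )

    # Generate a UTM-tagged URL for Google Analytics tracking.
    tagged = adv.url_utm_ga(
        'https://example.com',
        utm_source='newsletter',
        utm_medium='email',
    )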