This reference page for the Measuring Broadband America 2016 Report provides the validated data set used to produce the Report. It includes the detailed results for the 13 tests that formed the basis of the Report, in addition to the results of a separate quality metric run to assess which reporting units were available. The validation approaches used for the September 2015 data set are discussed in more detail in the Report, the Technical Appendix, and the Open Methodology webpage. Note that the materials below apply to data collected in September 2015 for the 2016 Report.
- Statistical Averages
- Data Processing Flow
- Data Dictionary
- Data Cleansing
- SQL Cleanup Scripts
- SQL Processing Scripts
- SPSS Processing Scripts
- Unit Profile
- Excluded Units
- Unit Census Block
- Validated September Data
Files
Statistical Averages
Description
Excel workbook of tabulated test results. It contains the tabular test data from the validated September-October 2015 data set, with a full range of statistical measures for each test.
Why We Provide It
This is the comprehensive analysis of all tests conducted within this study, comprising both the test results contained in the Report and additional tests described but not presented there. Researchers can use this data to perform further analysis of broadband performance in the United States.
Additional Information
See the section on methodology, exclusions and scripts for more information on how the statistical analysis was performed and how the data set was processed.
Download
statistical-averages-2015 v20160804.xlsx
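For researchers working in Python, a minimal sketch of loading the workbook is below. The sheet layout is not assumed in advance; the sketch lists the tabs actually present at runtime before parsing one.

```python
# A minimal sketch: load the statistical-averages workbook for further
# analysis. Requires pandas with openpyxl installed for .xlsx support.
import pandas as pd

xl = pd.ExcelFile("statistical-averages-2015 v20160804.xlsx")
print(xl.sheet_names)  # list the tabs actually present in the workbook

# Parse the first sheet into a DataFrame; substitute whichever tab you need.
df = xl.parse(xl.sheet_names[0])
print(df.head())
```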
Data Processing Flow
Description
This document provides an overview of how data is processed, from its raw form to the resulting report.
Why We Provide It
This file is provided as documentation to aid researchers and other parties interested in replicating or analyzing the data processing flow.
Download
data-processing-flow-sept2015.docx
Data Dictionary
Description
This document provides a brief explanation of each field included in the data set.
Why We Provide It
This file is provided as documentation of the structure of the data and describes the contents of the files on this page.
Download
Data Cleansing
Description
This document outlines the data cleansing processes used to generate the "clean" data set from the "raw" data set.
Why We Provide It
This file is provided as documentation of how the raw data was cleansed to be included in the validated data set.
Download
validated-data-cleansing-sept2015.docx
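As an illustration only, the sketch below shows the general shape of a cleansing step. The input file, column names, and rules are hypothetical; the document above remains the authoritative description of the actual process.

```python
# Illustration only: a hypothetical cleansing rule of the kind described in
# the document. File and column names (unit_id, bytes_sec) are assumptions,
# not the actual schema; the authoritative rules are in the .docx above.
import pandas as pd

raw = pd.read_csv("raw_measurements.csv")   # hypothetical input file
clean = raw.dropna(subset=["unit_id"])      # drop rows missing a unit ID
clean = clean[clean["bytes_sec"] >= 0]      # drop impossible negative rates
clean.to_csv("clean_measurements.csv", index=False)
```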
SQL Cleanup Scripts
Description
SQL scripts associated with the data cleansing process used to prepare raw September 2015 data for statistical analysis.
Why We Provide It
These scripts are provided for full transparency and to help researchers examine the data. They are necessary to prepare the data set for the SQL processing operations.
Download
sql-cleanup-scripts-sept2015.tar.gz
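The archive can be unpacked with standard tools; for example, using Python's standard library:

```python
# Unpack the cleanup-script archive with the standard library.
import tarfile

with tarfile.open("sql-cleanup-scripts-sept2015.tar.gz", "r:gz") as tar:
    tar.extractall("sql-cleanup-scripts")  # members are the SQL scripts
```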
SQL Processing Scripts
Description
SQL scripts used to prepare the per-unit averages used for statistical processing of the September 2015 data.
Why We Provide It
These scripts are provided for full transparency and to help researchers examine the data. They process the data to prepare it for use in statistical processing.
Download
sql-processing-scripts-sept2015.sql.gz
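The sketch below illustrates the general per-unit-averaging technique, not the FCC's actual SQL. The database file, table, and column names are assumptions modeled on a typical measurement schema; the real scripts above are authoritative.

```python
# Sketch of the general per-unit-averaging step: one average per unit,
# computed with a GROUP BY. All names here are assumptions.
import sqlite3

con = sqlite3.connect("mba_sept2015.db")  # hypothetical database file
rows = con.execute(
    """
    SELECT unit_id, AVG(bytes_sec) AS avg_bytes_sec, COUNT(*) AS n_tests
    FROM curr_httpget
    GROUP BY unit_id
    """
).fetchall()
for unit_id, avg_rate, n in rows[:5]:
    print(unit_id, avg_rate, n)
```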
SPSS Processing Scripts
Description
This archive contains the SPSS scripts used to merge and clean the data used in producing the averages and data tables for the charts in the Measuring Broadband America 2016 Report. The SPSS processing works on the per-unit CSV data, removes data with low sample sizes, and computes statistical averages for the remainder for use in the Report.
Why We Provide It
These scripts are provided for full transparency and to facilitate researchers in examining the data.
Download
spss-processing-scripts-sept2015.zip
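The sketch below mirrors the described steps in Python rather than SPSS: merge the per-unit data, drop units with too few samples, and average the rest. The file names, column names, and sample-size threshold are all assumptions; the SPSS scripts themselves are authoritative.

```python
# Sketch of the processing the SPSS scripts are described as doing.
# File names, column names, and the threshold below are assumptions.
import pandas as pd

units = pd.read_csv("unit_profile.csv")          # hypothetical file
averages = pd.read_csv("per_unit_averages.csv")  # hypothetical file
merged = averages.merge(units, on="unit_id")

MIN_SAMPLES = 5                                  # assumed cutoff, not the FCC's
kept = merged[merged["n_tests"] >= MIN_SAMPLES]

# Average across units within each ISP/tier grouping (assumed grouping keys).
report = kept.groupby(["isp", "service_tier"])["avg_bytes_sec"].mean()
print(report)
```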
Unit Profile
Description
This document identifies the details of each test unit, including ISP, technology, service tier, and general location. Each unit represents one volunteer panelist. The unit IDs were randomly assigned, which served to protect the anonymity of the volunteer panelists. Technical note: this is a large, normalized data set that expands to multiple files. To use this data you will most likely need to import it into a relational database.
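A minimal sketch of such an import, using SQLite; the file names are hypothetical, so substitute the files the download actually expands to.

```python
# Sketch: import the normalized files into SQLite so they can be joined.
# The file names below are assumptions, not the actual download contents.
import pandas as pd
import sqlite3

con = sqlite3.connect("mba_sept2015.db")
for name in ["unit_profile", "curr_httpget"]:  # hypothetical file names
    pd.read_csv(f"{name}.csv").to_sql(name, con, if_exists="replace", index=False)
con.commit()
```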
Why We Provide It
This data is provided as reference lookup information for the individual units running tests.
Download
Excluded Units
Description
A listing of units excluded from the statistical analysis with brief notations as to why.
Why We Provide It
This list is provided for transparency and identifies those units that were excluded from the validated September-October results used for the statistical analysis. Test results from the excluded units are contained in the Raw Bulk Data files.
Download
Unit Census Block
Description
This spreadsheet identifies the census block in which each unit running tests is located. The census block is from the 2000 Census and is in the FIPS code format. We have used block FIPS codes for blocks that contain more than 1,000 people. For blocks with fewer than 1,000 people we have aggregated to the next highest level, i.e., the tract, and used the tract FIPS code, provided there are more than 1,000 people in the tract. In cases where there are fewer than 1,000 people in a tract we have aggregated to the regional level.
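A sketch of this aggregation rule is below, assuming hypothetical population inputs. The FIPS structure itself is standard: a block code is 15 digits, the first 11 of which identify the tract.

```python
# Sketch of the aggregation rule described above. Population figures are
# hypothetical inputs; the FIPS digit layout is standard (state 2 + county 3
# + tract 6 + block 4).
def census_location(block_fips: str, block_pop: int, tract_pop: int, region: str) -> str:
    """Return the most specific geography with more than 1,000 people."""
    if block_pop > 1000:
        return block_fips          # full 15-digit block FIPS
    tract_fips = block_fips[:11]   # truncate to the 11-digit tract FIPS
    if tract_pop > 1000:
        return tract_fips
    return region                  # aggregate to the regional level

# Example with made-up populations: a sparse block in a populous tract.
print(census_location("110010001011000", block_pop=250, tract_pop=4200, region="South"))
# -> '11001000101' (tract-level code)
```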
Why We Provide It
To identify the general geographic location of the unit running the test.
Download
UnitID-census-block-sept2015.xlsx
Validated September Data
Description
This document contains the validated September-October 2015 data set that was used to produce the Report. It includes the detailed results for all 13 tests that were run as part of this study, in addition to the results of a separate quality metric run to assess which reporting units were available. The September-October 2015 data set was validated as discussed in the Report and Technical Appendix; further details can be found in the Methodology section of this page. Technical note: this is a large, normalized data set that expands to multiple files. To use this data you will most likely need to import it into a relational database.
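Assuming the files have been imported into a relational database as sketched under Unit Profile, a join can tie test results back to each unit's ISP and tier. Table and column names are again assumptions about the normalized layout.

```python
# Query the (assumed) imported tables: average rate per ISP and service tier.
import sqlite3

con = sqlite3.connect("mba_sept2015.db")
for row in con.execute(
    """
    SELECT p.isp, p.service_tier, AVG(t.bytes_sec)
    FROM curr_httpget AS t
    JOIN unit_profile AS p ON p.unit_id = t.unit_id
    GROUP BY p.isp, p.service_tier
    """
):
    print(row)
```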
Why We Provide It
The validated data is provided for full transparency and to enable researchers to conduct their own analysis from a common pool of data.