Data Science

Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data. Data science is related to data mining, deep learning and big data.

Data science is a “concept to unify statistics, data analysis, machine learning, domain knowledge and their related methods” in order to “understand and analyze actual phenomena” with data. It uses techniques and theories drawn from many fields within the context of mathematics, statistics, computer science, domain knowledge and information science. Turing award winner Jim Gray imagined data science as a “fourth paradigm” of science (empirical, theoretical, computational and now data-driven) and asserted that “everything about science is changing because of the impact of information technology” and the data deluge.

Data science is an interdisciplinary field focused on extracting knowledge from data sets, which are typically large (see big data). The field encompasses analysis, preparing data for analysis, and presenting findings to inform high-level decisions in an organization. As such, it incorporates skills from computer science, mathematics, statistics, information visualization, graphic design, and business. Statistician Nathan Yau, drawing on Ben Fry, also links data science to human-computer interaction: users should be able to intuitively control and explore data. In 2015, the American Statistical Association identified database management, statistics and machine learning, and distributed and parallel systems as the three emerging foundational professional communities.

Many statisticians, including Nate Silver, have argued that data science is not a new field, but rather another name for statistics. Others argue that data science is distinct from statistics because it focuses on problems and techniques unique to digital data. Vasant Dhar writes that statistics emphasizes quantitative data and description. In contrast, data science deals with quantitative and qualitative data (e.g. images) and emphasizes prediction and action. Andrew Gelman of Columbia University and data scientist Vincent Granville have described statistics as a nonessential part of data science. Stanford professor David Donoho writes that data science is not distinguished from statistics by the size of datasets or use of computing, and that many graduate programs misleadingly advertise their analytics and statistics training as the essence of a data science program. He describes data science as an applied field growing out of traditional statistics. In summary, data science can therefore be described as an applied branch of statistics.

In 1962, John Tukey described a field he called “data analysis,” which resembles modern data science. Later, attendees at a 1992 statistics symposium at the University of Montpellier II acknowledged the emergence of a new discipline focused on data of various origins and forms, combining established concepts and principles of statistics and data analysis with computing.

The term “data science” has been traced back to 1974, when Peter Naur proposed it as an alternative name for computer science. In 1996, the International Federation of Classification Societies held the first conference to specifically feature data science as a topic. However, the definition was still in flux. In 1997, C.F. Jeff Wu suggested that statistics should be renamed data science. He reasoned that a new name would help statistics shed inaccurate stereotypes, such as being synonymous with accounting, or limited to describing data. In 1998, Chikio Hayashi argued for data science as a new, interdisciplinary concept, with three aspects: data design, collection, and analysis.

During the 1990s, popular terms for the process of finding patterns in datasets (which were increasingly large) included “knowledge discovery” and “data mining.”

The modern conception of data science as an independent discipline is sometimes attributed to William S. Cleveland. In a 2001 paper, he advocated an expansion of statistics beyond theory into technical areas; because this would significantly change the field, it warranted a new name. “Data science” became more widely used in the next few years: in 2002, the Committee on Data for Science and Technology launched Data Science Journal. In 2003, Columbia University launched The Journal of Data Science. In 2014, the American Statistical Association’s Section on Statistical Learning and Data Mining changed its name to the Section on Statistical Learning and Data Science, reflecting the ascendant popularity of data science.

The professional title of “data scientist” has been attributed to DJ Patil and Jeff Hammerbacher in 2008. Though it was used by the National Science Board in their 2005 report, “Long-Lived Digital Data Collections: Enabling Research and Education in the 21st Century,” it referred broadly to any key role in managing a digital data collection.

There is still no consensus on the definition of data science and it is considered by some to be a buzzword.

Data science is a growing field. A career as a data scientist was ranked as the third best job in America for 2020 by Glassdoor, having been ranked the best job each year from 2016 to 2019.

Becoming a data scientist requires a significant amount of education and experience. The first step is typically a bachelor’s degree in a field related to computing or mathematics; coding bootcamps offer an alternative path and can supplement a bachelor’s degree in another field. Most data scientists also complete a master’s degree or a PhD in data science. Once these qualifications are met, the next step is to apply for an entry-level job in the field. Some data scientists later choose to specialize in a sub-field of data science.

Specializations and associated careers
1) Machine Learning Scientist: Machine learning scientists research new methods of data analysis and create algorithms.
2) Data Analyst: Data analysts utilize large data sets to gather information that meets their company’s needs.
3) Data Consultant: Data consultants work with businesses to determine the best usage of the information yielded from data analysis.
4) Data Architect: Data architects design and build data solutions that are optimized for performance.
5) Applications Architect: Applications architects track how applications are used throughout a business and how they interact with users and other applications.

Big data is quickly becoming a vital tool for businesses and companies of all sizes. The availability and interpretation of big data has altered the business models of old industries and enabled the creation of new ones. Data-driven businesses were collectively worth $1.2 trillion in 2020, up from $333 billion in 2015. Data scientists are responsible for breaking down big data into usable information and creating software and algorithms that help companies and organizations determine optimal operations. As big data continues to have a major impact on the world, so does data science, given the close relationship between the two.

There are a variety of different technologies and techniques that are used for data science which depend on the application. More recently, full-featured, end-to-end platforms have been developed and heavily used for data science and machine learning.

Techniques
Linear regression is used to model the relationship between a dependent variable and one or more explanatory variables.
Logistic regression is used to model the probability of a binary outcome.
Support vector machines (SVMs) are supervised models used for classification and regression.
Clustering is a technique used to group similar data points together.
Dimensionality reduction is used to reduce the complexity of data computation so that it can be performed more quickly.
Machine learning is a technique used to perform tasks by inferring patterns from data.
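As a minimal illustration of the first technique listed above, ordinary least squares linear regression can be implemented in a few lines of plain Python. This is a sketch with hypothetical sample data; no libraries are assumed:

```python
# Minimal ordinary least squares: fit y = a*x + b to sample points.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x  # Intercept passes through the means
    return a, b

# Hypothetical data lying exactly on the line y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)  # slope 2.0, intercept 1.0
```

In practice a library such as scikit-learn would be used instead, but the closed-form solution above is the same mathematics those libraries apply to the simple one-variable case.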

Python is a programming language with simple syntax that is commonly used for data science. A number of Python libraries are used in data science, including NumPy, pandas, and SciPy.
R is a programming language that was designed for statistical computing and data mining.
Julia is a high-level, high-performance, dynamic programming language well-suited for numerical analysis and computational science.
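A small sketch of how the Python libraries mentioned above are typically combined, assuming NumPy and pandas are installed; the dataset here is hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical dataset: daily temperatures for two cities
df = pd.DataFrame({
    "city": ["A", "A", "B", "B"],
    "temp": [20.0, 22.0, 15.0, 17.0],
})

# pandas: group-wise aggregation
means = df.groupby("city")["temp"].mean()
print(means["A"], means["B"])  # 21.0 16.0

# NumPy: vectorized arithmetic on the underlying array
celsius = df["temp"].to_numpy()
fahrenheit = celsius * 9 / 5 + 32  # applied element-wise, no loop
print(fahrenheit.mean())
```

The point of these libraries is that operations like the unit conversion above are expressed once and applied across whole arrays, rather than written as explicit loops.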

Frameworks
TensorFlow is a framework for creating machine learning models, developed by Google.
PyTorch is another machine learning framework, developed by Facebook.
Jupyter Notebook is an interactive web environment, commonly used with Python, that allows faster experimentation.
Apache Hadoop is a software framework that is used to process data over large distributed systems.
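Hadoop is built around the MapReduce programming model: a map step emits (key, value) pairs, a shuffle step groups them by key, and a reduce step aggregates each group. A single-machine sketch of that model in plain Python (the word-count example, with hypothetical input documents) illustrates the idea that Hadoop distributes across a cluster:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in the input
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each group to a single value
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big insights", "data science"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 2, 'insights': 1, 'science': 1}
```

On a real cluster, Hadoop runs the map and reduce functions in parallel on many machines and performs the shuffle over the network; the programmer supplies only the two functions.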

Visualization Tools
Plotly provides a rich set of interactive scientific graphing libraries.
Tableau makes a variety of software that is used for data visualization.
PowerBI is a business analytics service by Microsoft.

Platforms
RapidMiner is a data science software platform developed by the company of the same name.
Dataiku is a collaborative data science software marketed for big data.
Anaconda provides a comprehensive free and open-source distribution of the Python and R programming languages.
MATLAB is a computing environment heavily used in industry and academia.
