# DATA SCIENCE CERTIFICATION

## ABOUT THE COURSE:

Data science is a field of study that works with enormous amounts of data, using cutting-edge tools and methods to uncover hidden patterns, extract valuable information, and inform business decisions. Data science creates predictive models using sophisticated machine learning algorithms.

The data used for analysis can come in a variety of formats and from a wide range of sources.

## DESCRIPTION:

A collection of information is called data. One goal of data science is to organise data in a way that makes it understandable and practical to use.

Python is a programming language that data scientists frequently use. It comes with built-in mathematical libraries and functions that make solving mathematical problems and conducting data analysis simpler.

To develop predictions and analyse them, a data scientist needs to be familiar with mathematical functions.

Statistics is the art and science of data analysis.

The Data Science Developer Certificate demonstrates a basic understanding of Python, mathematical operations, and statistics.

## WHAT WILL I LEARN?

The program consists of 9 online courses that will provide you with the latest job-ready tools and skills, including open source tools and libraries, Python, databases, SQL, data visualization, data analysis, statistical analysis, predictive modelling, and machine learning algorithms.

## TOP HIRING COMPANIES:

- HCL Technologies

- IBM
- WIPRO
- SLACK
- DATA ZYMES

## SALARY:

The average data scientist salary is $100,560, according to the U.S. Bureau of Labor Statistics.

## DATA SCIENCE CERTIFICATION:

In this Data Science Training, you will learn how to analyze data and make better use of it in your industry.

Demand for data scientists continues to grow, and a certification will also help you increase your earning potential.

The flow of this certification model is:

- Data Science introduction and importance
- Data acquisition and Data Science Lifecycle
- Experimentation, Evaluation, and Project Deployment tools
- Different Algorithms used in Machine Learning
- Predictive analysis and segmentation using clustering
- Big Data fundamentals and Hadoop integration with R
- Data Scientist roles and responsibilities
- Deploying recommender systems on real-world data sets
- Data mining, data structures, and data manipulation

## QUESTION AND ANSWERS:

**What is Data Science?**

Data Science is a combination of algorithms, tools, and machine learning techniques that help you find hidden patterns in raw data.

**What is logistic regression in Data Science?**

Logistic regression is also called the logit model. It is a method to forecast a binary outcome from a linear combination of predictor variables.
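A minimal sketch of the idea: fit a logistic model to a tiny, made-up dataset (hours studied vs. pass/fail) with per-example gradient descent. The data and learning rate are illustrative assumptions, not a production recipe.

```python
import math

def sigmoid(z):
    # squashes a linear combination into a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical data: hours studied -> pass (1) / fail (0)
hours = [1, 2, 3, 4, 5, 6]
passed = [0, 0, 0, 1, 1, 1]

w, b = 0.0, 0.0
lr = 0.1
for _ in range(5000):
    for x, y in zip(hours, passed):
        p = sigmoid(w * x + b)
        # gradient of the log-loss for a single example
        w -= lr * (p - y) * x
        b -= lr * (p - y)

print(sigmoid(w * 6 + b) > 0.5)  # True: many hours -> predicted pass
print(sigmoid(w * 1 + b) < 0.5)  # True: few hours  -> predicted fail
```

In practice you would use a library implementation (e.g. scikit-learn's `LogisticRegression`), but the loop above is the entire mechanism.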

**Name three types of biases that can occur during sampling**

In the sampling process, there are three types of biases, which are:

- Selection bias
- Undercoverage bias
- Survivorship bias

**Discuss Decision Tree algorithm**

A decision tree is a popular supervised machine learning algorithm, mainly used for regression and classification. It breaks a dataset down into smaller and smaller subsets. A decision tree can handle both categorical and numerical data.
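The core of how a tree breaks a dataset into subsets is choosing the split that minimizes impurity. A sketch for a single numeric feature using Gini impurity, on made-up data:

```python
def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    # try every observed value as a threshold and keep the one with the
    # lowest weighted Gini impurity across the two resulting subsets
    best_t, best_g = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        g = sum(gini(side) * len(side) / len(ys)
                for side in (left, right) if side)
        if g < best_g:
            best_t, best_g = t, g
    return best_t

# hypothetical feature values and binary class labels
print(best_split([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1]))  # 3
```

A full tree applies this search recursively to each subset until the leaves are pure or a stopping rule fires.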

**What is Prior probability and likelihood?**

Prior probability is the proportion of the dependent variable in the dataset, while the likelihood is the probability of classifying a given observation in the presence of some other variable.

**Explain Recommender Systems?**

It is a subclass of information filtering techniques. It helps you predict the preferences or ratings that users are likely to give to a product.

**Name three disadvantages of using a linear model**

Three disadvantages of the linear model are:

- It assumes linearity of the errors.
- It can't be used for binary or count outcomes.
- It is prone to overfitting problems that it cannot solve on its own.

**Why do you need to perform resampling?**

Resampling is done in the following cases:

- Estimating the accuracy of sample statistics by drawing randomly with replacement from a set of data points, or by using subsets of the accessible data
- Substituting labels on data points when performing significance tests
- Validating models by using random subsets
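A sketch of the first case: bootstrapping a confidence interval for a sample statistic by drawing with replacement. The data values and the 2,000-resample count are illustrative choices.

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    # resample with replacement, compute the statistic on each resample,
    # and take percentiles of the resulting distribution
    rng = random.Random(seed)
    estimates = sorted(
        stat([rng.choice(data) for _ in data])
        for _ in range(n_boot)
    )
    lo = estimates[int(n_boot * alpha / 2)]
    hi = estimates[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / len(xs)

data = [4.1, 5.0, 4.8, 5.3, 4.6, 5.1, 4.9, 5.2]
low, high = bootstrap_ci(data, mean)
print(low < mean(data) < high)  # True
```

The same function works for any statistic (median, standard deviation, a model score) by swapping the `stat` argument.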

**List out the libraries in Python used for Data Analysis and Scientific Computations.**

- SciPy
- Pandas
- Matplotlib
- NumPy
- scikit-learn
- Seaborn

**What is Power Analysis?**

Power analysis is an integral part of experimental design. It helps you determine the sample size required to detect an effect of a given size with a specified level of confidence, before you run the experiment.
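A normal-approximation sketch of the required per-group sample size for a two-sample mean comparison. The formula is the standard z-based one; it ignores the small t-distribution correction, and the defaults (5% significance, 80% power) are the conventional choices.

```python
import math
from statistics import NormalDist

def sample_size(effect, sd, alpha=0.05, power=0.80):
    # per-group n to detect a mean difference `effect` with standard
    # deviation `sd` in a two-sided test, via the normal approximation
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

# a medium effect (Cohen's d = 0.5) at 80% power
print(sample_size(effect=0.5, sd=1.0))  # 63 per group
```

Halving the effect size roughly quadruples the required n, which is why detecting small effects is expensive.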

**Explain Collaborative filtering**

Collaborative filtering is used to find patterns by combining viewpoints, multiple data sources, and various agents.

**What is bias?**

Bias is an error introduced in your model because of the oversimplification of a machine learning algorithm. It can lead to underfitting.

**Discuss ‘Naive’ in a Naive Bayes algorithm?**

The Naive Bayes algorithm is based on Bayes' theorem, which describes the probability of an event using prior knowledge of conditions that might be related to it. It is called "naive" because it assumes the predictor features are independent of one another given the class.
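The independence assumption lets per-feature likelihoods simply multiply. A toy sketch with entirely made-up spam-filter probabilities:

```python
# hypothetical per-word likelihoods, assumed independent given the class
p_spam = 0.2                       # prior P(spam)
p_words_given_spam = 0.8 * 0.6     # P(word1|spam) * P(word2|spam)
p_words_given_ham = 0.1 * 0.2      # P(word1|ham)  * P(word2|ham)

# Bayes' theorem: P(spam | words) = P(words|spam) P(spam) / P(words)
evidence = (p_words_given_spam * p_spam
            + p_words_given_ham * (1 - p_spam))
posterior = p_words_given_spam * p_spam / evidence
print(round(posterior, 3))  # 0.857
```

Even with a low prior (20% spam), two spam-typical words push the posterior above 85% — the multiplication of likelihoods is the "naive" shortcut doing the work.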

**What is a Linear Regression?**

Linear regression is a statistical method in which the score of a variable 'A' is predicted from the score of a second variable 'B'. B is referred to as the predictor variable and A as the criterion variable.
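A least-squares sketch predicting A from B on toy numbers (here A happens to equal 2B + 1, so the fit recovers those coefficients exactly):

```python
def fit_line(b_vals, a_vals):
    # ordinary least squares: slope = cov(B, A) / var(B)
    n = len(b_vals)
    mb = sum(b_vals) / n
    ma = sum(a_vals) / n
    slope = (sum((b - mb) * (a - ma) for b, a in zip(b_vals, a_vals))
             / sum((b - mb) ** 2 for b in b_vals))
    intercept = ma - slope * mb
    return slope, intercept

# hypothetical predictor B and criterion A
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0
```

Predictions are then just `slope * b + intercept` for any new value of B.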

**State the difference between the expected value and mean value**

There are not many differences, but the two terms are used in different contexts. Mean value is generally used when you are discussing a probability distribution, whereas expected value is used in the context of a random variable.

**What is the aim of conducting A/B testing?**

A/B testing is used to conduct random experiments with two variants, A and B. The goal of this testing method is to identify changes to a web page that maximize or increase the outcome of a strategy.
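The usual decision rule behind an A/B test is a two-proportion z-test on conversion rates. A sketch with made-up counts (the pooled-standard-error formula is the textbook one):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # pooled two-proportion z-test; two-sided p-value from the normal CDF
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical: 200/1000 conversions on variant A vs 260/1000 on B
z, p = two_proportion_z(200, 1000, 260, 1000)
print(p < 0.05)  # True: the difference is unlikely to be chance
```

With a p-value below the chosen significance level, you would ship variant B; otherwise you keep collecting data or keep A.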

**What is Ensemble Learning?**

An ensemble is a method of combining a diverse set of learners to improve the stability and predictive power of the model. Two types of ensemble learning methods are:

**Bagging**

The bagging method trains similar learners on small bootstrap sample populations and combines their predictions by averaging or voting, which reduces variance.

**Boosting**

Boosting is an iterative method that adjusts the weight of an observation depending on the last classification. Boosting decreases bias error and helps you build strong predictive models.
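A bagging sketch: train one-threshold "stumps" on bootstrap samples of a made-up 1-D dataset and combine them by majority vote. The data, stump learner, and 25-round ensemble size are all illustrative assumptions.

```python
import random

def train_stump(xs, ys):
    # pick the threshold t minimizing errors of the rule "predict 1 if x > t"
    best_t, best_err = xs[0], float("inf")
    for t in set(xs):
        err = sum(int(x > t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(thresholds, x):
    # majority vote across all bagged stumps
    votes = sum(x > t for t in thresholds)
    return int(votes > len(thresholds) / 2)

rng = random.Random(0)
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
thresholds = []
for _ in range(25):
    sample = [rng.randrange(len(xs)) for _ in xs]  # bootstrap indices
    thresholds.append(train_stump([xs[i] for i in sample],
                                  [ys[i] for i in sample]))

print(bagged_predict(thresholds, 2), bagged_predict(thresholds, 8))
```

Boosting differs in that the rounds are sequential: each new learner concentrates on the examples the previous ones got wrong, rather than on an independent bootstrap sample.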

**Explain Eigenvalue and Eigenvector**

Eigenvectors are used for understanding linear transformations: they are the directions along which a particular linear transformation acts by compressing, flipping, or stretching, and the eigenvalues are the factors by which it scales them. Data scientists need to calculate the eigenvectors of a covariance or correlation matrix, for example in principal component analysis.
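A pure-Python power-iteration sketch that finds the dominant eigenvalue and eigenvector of a tiny symmetric, covariance-style matrix (which has eigenvalues 3 and 1 by construction); in practice you would call `numpy.linalg.eigh` instead.

```python
def power_iteration(A, iters=200):
    # repeatedly apply A and renormalize: the vector converges to the
    # eigenvector of the largest eigenvalue, and the Rayleigh quotient
    # v . (A v) converges to that eigenvalue
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

# a toy covariance-style matrix with eigenvalues 3 and 1
lam, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(round(lam, 6))  # 3.0
```

For a covariance matrix, this dominant eigenvector is exactly the first principal component: the direction of greatest variance in the data.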

**Define the term cross-validation**

Cross-validation is a validation technique for evaluating how the outcomes of a statistical analysis will generalize to an independent dataset. It is used in settings where the objective is forecasting and one needs to estimate how accurately a model will perform in practice.
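A k-fold split sketch: the indices are partitioned so that each observation appears in the test fold exactly once, and the k evaluation scores are averaged.

```python
def k_fold_indices(n, k):
    # partition indices 0..n-1 into k disjoint test folds;
    # everything outside a fold is its training set
    folds = []
    for i in range(k):
        test = list(range(n))[i::k]                 # every k-th index, offset i
        train = [j for j in range(n) if j % k != i]
        folds.append((train, test))
    return folds

folds = k_fold_indices(10, 5)
print(len(folds), len(folds[0][1]))  # 5 2
```

Real pipelines usually shuffle (or stratify) before splitting; scikit-learn's `KFold` and `cross_val_score` wrap this pattern.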

**Explain the steps for a Data analytics project**

The following are important steps involved in an analytics project:

- Understand the Business problem
- Explore the data and study it carefully.
- Prepare the data for modelling by finding missing values and transforming variables.
- Run the model and analyze the results.
- Validate the model with a new data set.
- Implement the model and track the result to analyze the performance of the model for a specific period.
