Hello, I need help with these SQL problems for Microsoft Access. I attached 5 files: one with the instructions, one with the Microsoft Access file that is needed, one with the problems, and the last two with the solutions to check your work!

Cleveland State University wants to design a database to organize their student membership in clubs on campus and maintain a record of their meetings. Cleveland State University Student Life maintains data about the following entities:
A. Club, including club name, description, budget, club president, and faculty advisor.
B. Student, including student ID, student name, email address, and major.
C. Faculty Advisor, including faculty ID, faculty name, phone number, email address, and department.
D. Meeting, including date, time, location, meeting length, and agenda.
Each club must have one faculty member who advises it, and the date when the faculty advisor first advised the club must be appropriately modeled. We are not interested in keeping the meeting information for a club if that club no longer exists.
Construct the E-R diagram for Cleveland State Student Life. Document all relationships (both directions for each relationship) and any assumptions that you make. Make sure that your model properly depicts entities, attributes, constraints, relationships, and weak entities (if any).
Note: It is very important that you stay within the bounds of the problem as stated above. For example, the description above refers to clubs that exist on the university's campus. It does not mention or imply anything about athletic teams. Therefore, don't even attempt to create a model that accommodates those items.
It is also very important that you do not add attributes if they are not necessary or required by the problem. For example, the description above refers to students. It does not mention or imply anything about student status (e.g., freshman, sophomore, etc.). Therefore, don't even attempt to create a model that includes student status as an attribute; it would be superfluous.
These are just examples of how you can find yourself out of bounds. Avoiding an overly general, all-inclusive model is as important as creating a well-bounded model that solves the problem.
Use draw.io to draw the diagram.

Q1) Write a SQL query to fetch all the duplicate records from the applicants table.
-- Table Structure:
drop table applicants;
create table applicants
(
user_id int primary key,
user_name varchar(30) not null,
email varchar(50));
insert into applicants values
(1, 'pearson', 'pearson@gmail.com'),
(2, 'Reshma', 'reshma@gmail.com'),
(3, 'Farhana', 'farhana@gmail.com'),
(4, 'Robin', 'robin@gmail.com'),
(5, 'Robin', 'robin@gmail.com');
select * from applicants;
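A minimal sketch of one possible answer, treating rows with the same user_name and email as duplicates (the grouping columns are an assumption; adjust them if duplicates are defined differently):
-- Return every row whose (user_name, email) pair occurs more than once.
select a.*
from applicants a
join (
    select user_name, email
    from applicants
    group by user_name, email
    having count(*) > 1
) dup
  on a.user_name = dup.user_name
 and a.email = dup.email;
-- With the sample data this returns both 'Robin' rows (user_id 4 and 5).
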
Q2) Create a SQL query to retrieve the employee table’s second-to-last record.
-- Table Structure:
drop table employee;
create table employee
( emp_ID int primary key
, emp_NAME varchar(50) not null
, DEPT_NAME varchar(50)
, SALARY int);
REPLACE INTO employee values(101, 'Mohan', 'Admin', 4000);
REPLACE INTO employee values(102, 'Rajkumar', 'HR', 3000);
REPLACE INTO employee values(103, 'Akbar', 'IT', 4000);
REPLACE INTO employee values(104, 'Dorvin', 'Finance', 6500);
REPLACE INTO employee values(105, 'Rohit', 'HR', 3000);
REPLACE INTO employee values(106, 'Rajesh', 'Finance', 5000);
REPLACE INTO employee values(107, 'Preet', 'HR', 7000);
REPLACE INTO employee values(108, 'Maryam', 'Admin', 4000);
REPLACE INTO employee values(109, 'Sanjay', 'IT', 6500);
REPLACE INTO employee values(110, 'Vasudha', 'IT', 7000);
REPLACE INTO employee values(111, 'Melinda', 'IT', 8000);
REPLACE INTO employee values(112, 'Komal', 'IT', 10000);
REPLACE INTO employee values(113, 'Gautham', 'Admin', 2000);
REPLACE INTO employee values(114, 'Manisha', 'HR', 3000);
REPLACE INTO employee values(115, 'Chandni', 'IT', 4500);
REPLACE INTO employee values(116, 'Satya', 'Finance', 6500);
REPLACE INTO employee values(117, 'Adarsh', 'HR', 3500);
REPLACE INTO employee values(118, 'Tejaswi', 'Finance', 5500);
REPLACE INTO employee values(119, 'Cory', 'HR', 8000);
REPLACE INTO employee values(120, 'Monica', 'Admin', 5000);
REPLACE INTO employee values(121, 'Rosalin', 'IT', 6000);
REPLACE INTO employee values(122, 'Ibrahim', 'IT', 8000);
REPLACE INTO employee values(123, 'Vikram', 'IT', 8000);
REPLACE INTO employee values(124, 'Dheeraj', 'IT', 11000);
select * from employee;
Required Output: Vikram
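One possible sketch, assuming "record order" means ascending emp_ID and a database that supports LIMIT/OFFSET (e.g., MySQL):
-- Sort descending and skip the last record to land on the second-to-last.
select emp_NAME
from employee
order by emp_ID desc
limit 1 offset 1;
-- Returns 'Vikram' (emp_ID 123) for the sample data.
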
Create a SQL query to show the employee table's information only for those employees who have the highest or lowest salary in their department.
-- Table Structure: same employee table and data as in Q2 above.
Example output for the Admin department:
emp_ID | emp_NAME | DEPT_NAME | SALARY | max_salary | min_salary
113    | Gautham  | Admin     | 2000   | 5000       | 2000
120    | Monica   | Admin     | 5000   | 5000       | 2000
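A possible sketch using window functions (MySQL 8+ or similar): compute each department's max and min salary, then keep only the rows that hit either bound.
select emp_ID, emp_NAME, DEPT_NAME, SALARY, max_salary, min_salary
from (
    select e.*,
           max(SALARY) over (partition by DEPT_NAME) as max_salary,
           min(SALARY) over (partition by DEPT_NAME) as min_salary
    from employee e
) t
where SALARY = max_salary
   or SALARY = min_salary;
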
Create a SQL query on the students table to swap adjacent student names (the first with the second, the third with the fourth, and so on).
Note: The student name should remain the same if there are no adjacent students.
-- Table Structure:
drop table students;
create table students
(
id int primary key,
student_name varchar(50) not null
);
REPLACE INTO students values
(1, 'James'),
(2, 'Michael'),
(3, 'George'),
(4, 'Stewart'),
(5, 'Robin');
select * from students;
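A possible sketch, assuming consecutive ids (1, 2, 3, ...) and window-function support: pair each odd id with the following even id and swap their names; a trailing odd row keeps its own name.
select id,
       case
         -- odd row: take the next name if it exists, otherwise keep its own
         when id % 2 = 1
           then coalesce(lead(student_name) over (order by id), student_name)
         -- even row: take the previous name
         else lag(student_name) over (order by id)
       end as new_student_name
from students
order by id;
-- James/Michael and George/Stewart swap; Robin (id 5) is unchanged.
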
Q7) Get all the instances where Alaska experienced extremely low temperatures for three or more straight days from the weather table.
Note: When the weather is below zero, it is deemed to be extremely cold.
-- Table Structure:
drop table weather;
create table weather
(
id int,
city varchar(50),
temperature int,
day date
);
delete from weather;
REPLACE INTO weather values
(1, 'Alaska', -1, to_date('2021-01-01','yyyy-mm-dd')),
(2, 'Alaska', -2, to_date('2021-01-02','yyyy-mm-dd')),
(3, 'Alaska', 4, to_date('2021-01-03','yyyy-mm-dd')),
(4, 'Alaska', 1, to_date('2021-01-04','yyyy-mm-dd')),
(5, 'Alaska', -2, to_date('2021-01-05','yyyy-mm-dd')),
(6, 'Alaska', -5, to_date('2021-01-06','yyyy-mm-dd')),
(7, 'Alaska', -7, to_date('2021-01-07','yyyy-mm-dd')),
(8, 'Alaska', 5, to_date('2021-01-08','yyyy-mm-dd'));
select * from weather;
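A possible gaps-and-islands sketch, assuming id increases by exactly 1 per day: among the below-zero rows, consecutive ids share the same (id - row_number) value, which groups each cold streak so streaks of three or more days can be kept.
select id, city, temperature, day
from (
    select c.*,
           count(*) over (partition by grp) as streak_len
    from (
        -- keep only the extremely cold days and tag each streak
        select w.*,
               id - row_number() over (order by id) as grp
        from weather w
        where temperature < 0
    ) c
) t
where streak_len >= 3
order by id;
-- For the sample data this returns the 2021-01-05 through 2021-01-07 rows.
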

Download a dataset of size 300 MB or more and then solve the following programming questions using the Spark ML library.
a. A classification problem using the KNN algorithm.
b. A regression problem using the KNN algorithm.
c. A clustering problem using the K-means algorithm.
Deliverables
A Word document which contains the following:
- Your solution to the classification problem.
- Your solution to the regression problem.
- Your solution to the clustering problem.
All solutions should include screenshots of the code with a description of each step.

Using the table of transactions (table attached), generate all the frequent itemsets using the Apriori algorithm and then generate the interesting association rules. Also show which of the interesting rules are positively correlated. Assume minsup >= 0.3 and minimum confidence >= 0.4.
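As a reminder of the criteria involved: an itemset X is frequent when support(X) >= 0.3; a rule A => B is interesting when confidence(A => B) = support(A ∪ B) / support(A) >= 0.4; and a rule is positively correlated when its lift, support(A ∪ B) / (support(A) × support(B)), is greater than 1.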

Unit 10 Assignment: Further Exploration of the Hadoop Environment
Outcomes addressed in this activity:
Unit Outcomes:
Migrate an unstructured data file from a local file system to the Apache Hadoop Distributed File System (HDFS).
Transform data using Apache Hive’s flexible SerDes (serializers/deserializers) to parse the log data into individual fields using a regular expression.
Perform data analysis using Apache Hive.
Course Outcome:
IT350-6: Explore non-relational database alternatives.
Purpose
In the modern world of big data, unstructured data is the most abundant. It is so prolific because unstructured data could be anything: media, imaging, audio, sensor data, text data, and much more. Unstructured simply means that datasets (typically large collections of files) are not stored in a structured database format. Unstructured data has an internal structure, but it is not predefined through data models. It might be human-generated or machine-generated in a textual or a non-textual format.
You will migrate a log file containing unstructured web clickstream data to the Apache Hadoop Distributed File System (HDFS). You will then transform the unstructured data into individual fields through the use of Apache Hive's flexible SerDes (serializers/deserializers) functionality. You will complete the lab by performing basic data analysis by querying the migrated and transformed data in Apache Hive. Apache Hive is a data warehouse software project built on top of Apache Hadoop, providing data query and analysis capabilities.
Assignment Instructions
Navigate to the Academic Tools area of this course and select Library, then Required Readings to review the Unit 10 videos covering facets associated with Hadoop. It is very important that you watch the Unit 10 videos before completing the assignment.
The assignment work will be performed within Codio’s cloud-based learning environment. Navigate to this course’s main menu and select Codio to access this platform.
Your course instructor will provide you with the Codio connection details for accessing the specific online lab environment. The lab environment consists of a Linux virtual machine that has MySQL, Apache Hadoop, and Apache Hive. The work will be performed using a command line interface (CLI) within a Linux Terminal window.
Complete Lab Exercise 2 (starts on page 12) contained in the following lab document:
IT350 Codio Big Data Labs
In a Microsoft Word document, describe your experience of completing this lab exercise in 250–300 words.
In addition to the Word document, you are required to provide the screen.log file and a comma separated value (CSV) file as part of the assignment submission. Details on the screen.log and CSV files are contained in the lab document. The submitted screen.log and CSV files provide verification of the completed lab work.
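For orientation only, the SerDe step in a lab like this generally looks something like the sketch below. The table name, columns, regular expression, and HDFS path here are illustrative placeholders, not the ones from the IT350 lab document:
-- Hypothetical example: parse a raw clickstream log into three string fields
-- using Hive's built-in RegexSerDe; each capture group maps to one column.
create external table clickstream_sketch (
  ip           string,
  request_time string,
  url          string
)
row format serde 'org.apache.hadoop.hive.serde2.RegexSerDe'
with serdeproperties (
  "input.regex" = "^(\\S+) \\[([^\\]]+)\\] \"(\\S+)\"$"
)
stored as textfile
location '/user/hadoop/clickstream';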

Use the Excel data set I sent you and import it into a table to make a graph. Then randomly select a city in the data set to analyze, write one or two SQL statements about it, and then make a video.

Unit 9 Assignment: Exploring the Hadoop Environment
Outcomes addressed in this activity:
Unit Outcomes:
Migrate structured data from a MySQL database into the Apache Hadoop Distributed File System (HDFS).
Perform data analysis using Apache Hive.
Course Outcome:
IT350-6: Explore non-relational database alternatives.
Purpose
Structured data entails data that is in a standardized format, has a well-defined structure, conforms to a data model, follows a persistent order, and is easily accessed by humans and programs. Structured data consists of clearly defined data types with patterns that make them easily searchable. This data type is generally stored in a relational database.
You will migrate structured data from an existing MySQL relational database to the Apache Hadoop Distributed File System (HDFS). You will perform basic data analysis by querying the migrated data in Apache Hive. Apache Hive is a data warehouse software project built on top of Apache Hadoop that provides data query and analysis functionality.
Assignment Instructions
Navigate to the Academic Tools area of this course and select Library, then Required Readings to review the Unit 9 videos covering facets associated with Hadoop. It is very important that you watch the Unit 9 videos before completing the assignment.
The assignment work will be performed within Codio’s cloud-based learning environment. Navigate to this course’s main menu and select Codio to access this platform.
Your course instructor will provide you with the Codio connection details for accessing the specific online lab environment. The lab environment consists of a Linux virtual machine that has MySQL, Apache Hadoop, and Apache Hive. The work will be performed using a command line interface (CLI) within a Linux Terminal window.
Complete Lab Exercise 1 contained in the following lab document:
IT350 Codio Big Data Labs
In a Microsoft Word document, describe your experience of completing this lab exercise in 250–300 words.
In addition to the Word document, you are required to provide the screen.log file and two comma separated value (CSV) files as part of the assignment submission. Details on the screen.log and CSV files are contained in the lab document. The submitted screen.log and CSV files provide verification of the completed lab work.

CHECK ATTACHED FILE
1. Open the attached file.
2. You will find a file named "day3".
3. Open the "day3" file.
4. You will find 2 files; solve one of them.
Submit the answer:
1. Fill in the Word document.
2. Send the .sql file.