Monday, April 29, 2019

Footprinting and Reconnaissance


What the hell is Footprinting?
Well, in plain layman's language: “Footprinting, in security terms, is the process of gathering as much information as possible about the target organization.”
Information like:
Physical location, domain, number of employees, URLs, VPNs, phone numbers, IP addresses, etc.
Footprinting Threats
1. Social Engineering: the easiest of all, and can be done without any tool.
2. Business Loss
3. Corporate Espionage
4. Information Leakage
5. System and Network Attacks
Objectives of Footprinting
1. Collect network information
2. Collect system information
3. Collect organization information
Different Footprinting Methods
1. Footprinting through social media: the easiest to do. Attackers typically create fake accounts/IDs and try to gather as much information as possible about the target organization.
2. Footprinting through search engines like Bing, Google and DuckDuckGo (my favorite is DuckDuckGo). Attackers also look through caches and archives. Netcraft, Shodan, Pipl and Google Earth are some good tools for performing footprinting.
3. Footprinting through job sites: from job postings, attackers learn which tools and technologies the organization works with.
4. Target monitoring through alerts such as Google Alerts, Twitter alerts and Yahoo alerts.
5. Another good method is Google hacking databases and advanced search queries. Query strings can be used in searches as keywords, and Google advanced search operators can be utilized. For example, intitle:"index of" lists sites with open directory indexes. securityfocus.com and hackersforcharity.org/ghdb are a few sites where you can find most of this info.
6. Website footprinting is monitoring the target organization's website. Web server details, directory structure and developer email IDs are some of the common info. There are also tools that can mirror an entire website, and historical versions of a site can be extracted from archive.org.
7. Email tracking is used to track emails. Emails are used to gather information in order to perform social engineering, spam and many other attacks.
8. DNS information: attackers can enumerate the hosts in a network. They can obtain A, CNAME, PTR, MX, NS and HINFO records. There are a lot of command-line utilities available for gathering DNS information; nslookup and dig are the most common.
9. WHOIS: attackers perform WHOIS lookups to understand who is behind a specific domain. ARIN, AFRINIC, RIPE, APNIC and LACNIC are the RIRs (Regional Internet Registries). From WHOIS we can get info like the domain owner's email and address, the name servers for the domain, and the registrar.
10. Network footprinting: identifying the target's network ranges and mapping the network, for example with traceroute.
11. Footprinting through social engineering: eavesdropping, shoulder surfing, dumpster diving.
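The DNS step above (item 8) can be sketched with nothing but the Python standard library. This is a minimal example, not a full reconnaissance tool: it resolves only A records, and it targets localhost so the sketch runs anywhere; on a real (authorized) engagement you would swap in the target domain and use dig or a dedicated DNS library for MX, NS and other records.

```python
import socket

def resolve_host(hostname):
    """Return the unique IPv4 addresses (A-record style) for a hostname."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr); keep unique IPs.
    return sorted({sockaddr[0] for *_, sockaddr in infos})

# Demo against localhost so the sketch runs without external DNS;
# replace with the target domain (with permission!) for a real lookup.
print(resolve_host("localhost"))
```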


Top 5 Skills Of A Six Sigma Practitioner


In today’s competitive world, companies are not only looking for highly skilled professionals; they are also very keenly looking for Six Sigma professionals, as these certified professionals are well aware of all the Six Sigma methodologies and principles that will be beneficial to companies. These professionals possess the ability to quickly identify loopholes in a business process, which in turn eliminates waste and reduces overhead costs.
Six Sigma methodologies are suitable for small as well as large business establishments. Six Sigma principles are not confined to one particular industry, these principles and methodologies can be adopted by all sectors of industries. It can be applied in manufacturing industries and also service based industries.
To withstand fierce competition in the global market, companies are offering top-of-the-line service while also focusing on the price and quality of the product. For this reason, companies are looking to minimize additional costs as much as possible and ensure that they don't incur additional costs in manufacturing a product, which helps organizations maintain a competitive price to attract more customers.
Six Sigma practitioners can come from different professional backgrounds; they can be from a product-based company or a service-based company. Though they come from different professional backgrounds, they should have the essential skills which are common to all professions. Listed below are some of the skills of a Six Sigma professional:
1. Positive Attitude
An efficient Six Sigma practitioner should always carry a positive attitude and must be a firm believer that they can achieve the tasks and goals of the project. A Six Sigma practitioner must be a role model for their team members and should be able to motivate them. If necessary, they should also provide assistance to team members who experience difficulties in understanding the subject.
2. Taking the Initiative
Another critical characteristic of a Six Sigma practitioner is that they should always be ready to take the initiative. An efficient Six Sigma practitioner must be able to manage the project team and encourage them to work to their complete potential.
3. Good Communication Skills
A Six Sigma practitioner must be a good communicator. They must have good oral communication skills and must convey information to team members as well as top officials in a clear and understandable manner. If anyone has difficulty understanding the information, the practitioner can explain it to them personally, by phone or through email. Apart from good communication skills, a Six Sigma practitioner must also possess good writing and presentation skills, as they carry the responsibility of generating project status reports, which will be shared with team members as well as stakeholders.
4. Good Understanding of Business Process 
It is important that Six Sigma practitioners understand all aspects of the business process in order to deliver better results. This is also important because they will be able to help new joiners as well as existing team members with any queries regarding the project. It will also help them identify loopholes in the business process and chalk out a plan to fix them. This improves the business process, making the team more efficient and producing timely results.
5. Management Skills
One of the important characteristics of a good Six Sigma practitioner is knowing whom to report to in case of any discrepancies in the project development process. Usually, discrepancies concern the budget, resources, or new recommendations from the client. They must report all these issues to the higher officials and stakeholders who are the decision makers. They must be competent enough to effectively convey information to stakeholders and also take feedback and implement it effectively. In other words, they should be able to manage stakeholders and the team in an efficient manner.
Six Sigma professionals are in huge demand, as all companies want to reduce their cost overheads and enhance their profit potential. Six Sigma practitioners have the knowledge to bring waste down to near zero and to make optimum use of the available resources.

What is Networking Hardware?


Network hardware refers to the individual components of a network system that are responsible for transmitting data and facilitating the operations of a network. Although a network contains many hardware components, there are a few basic categories that make up the complete operations of a network system. Here are some of the different categories and how they contribute, as a whole, to the functioning of a network system.



Network Servers:  
One or more network servers are part of nearly every local area network. These are very fast computers with a large amount of RAM and storage space, along with one or more fast network interface cards. The network operating system provides tools to share server resources and information with network users. The network server may be responding to requests from many network users simultaneously. For example, it may be asked to load a word processor program to one workstation, receive a database file from another workstation, and store an e-mail message during the same time period.
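The "many requests at once" behaviour described above can be sketched with a thread pool. This is an illustrative toy, not a real network stack: the request tuples and the handler are made up, and a real server would be reading from disks and sockets rather than returning strings.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request):
    """Pretend to serve one workstation's request and return a status line."""
    kind, payload = request
    # A real server would hit the disk or a database here.
    return f"served {kind}: {payload}"

# Requests arriving from several workstations "at the same time".
requests = [
    ("load-program", "word processor"),
    ("store-file", "database file"),
    ("store-mail", "e-mail message"),
]

# A thread pool lets the server work on all three concurrently;
# pool.map still returns results in submission order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, requests))

for line in results:
    print(line)
```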


Sunday, April 28, 2019

Data Protection Commission Investigates Facebook


After Facebook alerted the Data Protection Commission (DPC) that it had found hundreds of millions of user passwords stored in its internal servers in plain text format, DPC launched an investigation to determine whether the company had acted in compliance with the General Data Protection Regulation (GDPR), according to an April 25 press release.
According to its website, the DPC is the Irish supervisory authority for GDPR and is the national independent authority charged with upholding the data protection rights of individuals in the EU.
“The Data Protection Commission was notified by Facebook that it had discovered that hundreds of millions of user passwords, relating to users of Facebook, Facebook Lite and Instagram, were stored by Facebook in plain text format in its internal servers. We have this week commenced a statutory inquiry in relation to this issue to determine whether Facebook has complied with its obligations under relevant provisions of the GDPR,” a statement from the DPC said.
Though a Facebook spokesperson told Business Insider, “We are working with the IDPC on their inquiry. There is no evidence that these internally stored passwords were abused or improperly accessed," the accidental mishandling of these passwords could result in a multi-billion-dollar fine for the social media company, according to the news outlet.
The news comes only days after Facebook said it had unintentionally uploaded – without consent – the emails of 1.5 million users. Earlier this month, Infosecurity also reported that over half a billion Facebook records were leaked by third-party app developers.
Facebook announced on March 21, 2019, that it had found some passwords being stored in readable format on its internal data storage systems, and the company updated that post on April 18 to add: “Since this post was published, we discovered additional logs of Instagram passwords being stored in a readable format. We now estimate that this issue impacted millions of Instagram users. We will be notifying these users as we did the others. Our investigation has determined that these stored passwords were not internally abused or improperly accessed.”

Top 5 Ingredients For Developing Cloud Native Applications


Cloud-native application development is a growing trend that promises to develop and deploy applications faster and more cost-efficiently by leveraging cloud services for runtime platform capabilities like scalability, performance and security. More than 60% of enterprises worldwide are building their application strategy on cloud platforms, and the number is expected to double by the end of 2020. Tech-savvy companies are deploying path-breaking technology solutions and cloud-native applications, which is helping them stay well ahead of their competitors.
Here are some of the Ingredients for developing cloud-native applications.

Microservice Architecture Pattern

The microservice architecture pattern aims to deliver self-contained software products that each implement a single capability and can be developed, tested and deployed individually, propelling the agility and time to market of the end product. While designing microservices, it is extremely important to design them to run cloud-native. A major challenge in designing a cloud-native application is the ability to operate the application in a distributed environment across application instances.

Automating the Cloud Platform

A cloud platform offers platform services to run business applications, and it manages the network and security access within the cloud environment. The cloud is very cost-effective and does not require a huge capital investment to set up and manage a data centre. The elasticity of the cloud platform lets companies start small and grow without limits, at least as far as the infrastructure is concerned.
A cloud platform that cannot be provisioned fully automatically cannot provide the speed and agility promised by running cloud-native applications. One will have to follow the infrastructure-as-code pattern, which prescribes defining the infrastructure the same way one writes software: in code. This lets you reuse widely adopted best practices for testing, version control, and software lifecycle management.
You can also build, test and deploy the application with minimal human interaction. Combining this with infrastructure as code in a continuous delivery pipeline gives the provisioning speed needed to fulfil the promises of cloud-native apps.

Serverless Functions

These are small, single-purpose functions that are triggered by events, without the need to manage the runtime environment. With serverless functions, asynchronous event-driven architectures can be built using single-purpose microservices. Functions are usually developed in common languages like Node.js or Java.
The term serverless can be a bit misleading, as the code still runs on servers; the key difference is that the cloud provider takes care of spinning up instances to run the code and scaling out under load. Therefore, FaaS (Function as a Service) is a popular synonym. Typical use cases for FaaS are those triggered by an event in the cloud, like the arrival of a message on a queue, processing log events, or the execution of scheduled tasks.
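The shape of such an event-triggered function can be sketched as follows. This is a hypothetical handler, not tied to any real FaaS provider's API: the event format, field names and the order-total logic are all invented for illustration; the point is that the function is small, stateless and driven entirely by the incoming event.

```python
import json

def handle_queue_event(event):
    """A single-purpose, stateless function: triggered by an event,
    it does one job and returns a result. No server management in user code."""
    record = json.loads(event["body"])
    total = record["quantity"] * record["unit_price"]
    return {"status": 200, "body": json.dumps({"order_total": total})}

# Simulated event, e.g. a message arriving on a queue.
event = {"body": json.dumps({"quantity": 3, "unit_price": 9.5})}
response = handle_queue_event(event)
print(response["body"])
```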
Apart from FaaS, cloud providers offer many other serverless services that can be used by a cloud-native application. Think of services for data storage at scale, IoT services, and big data and stream-analysis services. These services are usually built from open source solutions. The benefit of using these cloud services in cloud-native applications is that you do not have to manage the platform for these services, which is often complex by nature and requires specific skills. These offerings allow you to focus on the functionality without having to worry about setting up or running the software.

A shift in the DevOps Culture

Adopting cloud-native applications is not just a tech-driven change; it also needs a culture change. Automation of the entire software delivery pipeline is possible only if development and operations do not work in silos, but function as a team with shared responsibility for delivering a software product from concept onward.
In reality, one will see that the operations consist of platform engineers that are responsible for the development, automation as well as an operation of the cloud platform. The development will consist of developers who build apps that can run on a cloud platform. Adopting the DevOps principles is another major ingredient for the cloud-native applications. The product teams that comprise the operations and development will ensure that the software, as well as the delivery pipeline, operates with the right speed.

Cloud Reliability Engineering

This is a concept derived from Google's site reliability engineering. It is based on the idea of approaching platform administration as software engineering. Adopting the concepts of infrastructure as code not only automates platform provisioning and application deployment but also makes the platform self-healing and reliable.
Transforming the Ops team into a platform engineering team, which constantly works on improving performance and reliability using the principles defined by site reliability engineering, is critical for achieving this goal.
Cloud-native applications help firms stay well ahead of their competitors by enabling them to deliver innovation at a much faster pace and in a cost-effective manner. If companies bring in the people and culture aspects while building cloud-native apps, rather than just sticking to the right technology or tools, the business will become more effective.


Friday, April 26, 2019

Six Sigma Green Belt Vs Six Sigma Black Belt


Six Sigma is a quality management strategy used to identify defects in a process and rectify them, improving customer satisfaction and the profitability of the organization by implementing quality guidelines. Six Sigma teams contain different levels of members according to the Six Sigma classification, including Green Belt and Black Belt.
Six Sigma Black Belt certification is a higher-level certification an individual can earn after a Six Sigma Green Belt, and the key difference between Green Belt and Black Belt lies here. Six Sigma Green Belt covers the key fundamentals, tips and techniques of Six Sigma for people working on their first Six Sigma projects. The concepts explained are Define, Measure, Analyze, Improve and Control, along with graphical analysis techniques and Lean Six Sigma processes. A certified Green Belt assists with analysis and data collection for Black Belt projects and leads smaller projects. Most organizations believe Green Belts are the future of the company, mainly because they gain experience by working with stakeholders at all levels of the company.
Six Sigma Black Belt is for people who have already earned a Six Sigma Green Belt certification and moved on to work on a number of projects. In Black Belt certification they are trained on deeper analysis tools. The syllabus includes the Green Belt material plus instruction on stakeholder influencing, change management and advanced statistical tools. A Black Belt should have above-average analytical skills, some computer proficiency, and a good level of competence in computer analysis and basic math. A certified Black Belt works on problem-solving or cross-functional projects across all departments of the organization. They are good project leaders and excellent communicators. A Master Black Belt sits above both and typically trains 20 or more Green Belts and Black Belts. Finally, both Six Sigma Green Belt and Black Belt holders are excellent problem solvers.

CCNA – Operation Of IP Data Networks



There are tons of books written on the OSI and TCP/IP models, so I won't describe these models in depth here. What I will do is explain what you need to know at each level and how the real world works. We have two models, one from OSI and one from the DOD.

In real life, everyone references the OSI model. I've never heard anyone reference the DOD model, which doesn't mean it lacks merit, but everyone always uses the OSI model as a reference.
The OSI model has seven layers but people sometimes joke that layer 8 is financial and layer 9 is political.
Starting with the physical layer, what you need to know is auto-negotiation. Auto-negotiation is good; hard-coding speed and duplex will no doubt lead to ports that are hard-coded on one side and auto on the other ending up in half duplex. Gone are the days when auto-negotiation wasn't compatible and led to misconfigured ports. Auto-negotiation very rarely fails; until proven otherwise, always use it. And did you know that if you disable auto-negotiation, you also disable some of Ethernet's error-checking mechanisms, such as Remote Fault Indication (RFI)?
At the data link layer you should be comfortable with MAC addresses and hexadecimal numbers. Learn how a MAC address is built from the Organizationally Unique Identifier (OUI). When troubleshooting, it is often useful to check the OUI of a MAC address to know what is connected to a port: is it a Cisco device or a PC, for example? Learn how easy it is to spoof a MAC. How can you perform a man-in-the-middle attack? How do you protect against that? Learn about port security, Dynamic ARP Inspection, DHCP snooping and so on. Proper layer-two security is critical in networks.
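The OUI check described above is easy to script. A small sketch follows; the one-entry vendor table is illustrative only (a real lookup would use the IEEE registry), though 00:00:0C is genuinely a Cisco OUI.

```python
def oui(mac):
    """Extract the Organizationally Unique Identifier (first 3 bytes) of a MAC."""
    # Accept common notations: 00:1A:2B..., 00-1a-2b..., 001a.2b3c.4d5e
    digits = "".join(c for c in mac if c.isalnum()).upper()
    if len(digits) != 12:
        raise ValueError(f"not a 48-bit MAC address: {mac!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 6, 2))

# A tiny, illustrative vendor table; real lookups use the IEEE OUI registry.
KNOWN_OUIS = {"00:00:0C": "Cisco Systems"}

mac = "0000.0c07.ac01"
print(oui(mac), "->", KNOWN_OUIS.get(oui(mac), "unknown vendor"))
```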
At the network layer you MUST know IP addressing. Throw away the subnet calculator and learn how to calculate subnets, usable hosts, the subnet ID and the broadcast address manually. This will be the best thing you've ever done; if you don't know subnetting by heart you'll never become a really skilled network engineer. Everything depends on knowing IP addressing: calculating wildcards, understanding routing, configuring firewalls and so on.
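Do the arithmetic by hand first, then check yourself. Python's standard-library ipaddress module makes a handy answer key; for 192.168.10.77/26, the block size is 64, so by hand you should get network 192.168.10.64, broadcast 192.168.10.127 and 62 usable hosts.

```python
import ipaddress

def subnet_summary(cidr):
    """Return the subnet facts you should be able to compute by hand."""
    net = ipaddress.ip_network(cidr, strict=False)  # strict=False allows a host address
    usable = max(net.num_addresses - 2, 0)  # minus network ID and broadcast
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        "netmask": str(net.netmask),
        "wildcard": str(net.hostmask),
        "usable_hosts": usable,
    }

print(subnet_summary("192.168.10.77/26"))
```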
At the transport layer you must understand the differences between UDP and TCP. Why can UDP utilize the bandwidth fully on my link but not TCP? What is the window size? What are sequence numbers? Why does multicast use UDP? Ask these questions and learn UDP and TCP properly; it will immensely help you in your career down the line. Take the time to really learn TCP/IP and how the windowing mechanism works, what slow start is, why packet loss is really bad for TCP, and what the Bandwidth Delay Product (BDP) is. It is also important to understand things such as CEF polarization. How is load sharing performed on EtherChannels? What algorithm can I use to get a better-distributed load?
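The BDP mentioned above is just bandwidth multiplied by round-trip time: the number of bytes that must be "in flight" to keep the link full, and the minimum TCP window needed to achieve full throughput. The link figures below are an arbitrary example.

```python
def bandwidth_delay_product(bandwidth_bps, rtt_seconds):
    """BDP in bytes: bandwidth (bits/s) times RTT (s), converted to bytes."""
    return bandwidth_bps * rtt_seconds / 8  # bits -> bytes

# Example: a 100 Mbit/s link with a 40 ms round-trip time.
bdp = bandwidth_delay_product(100_000_000, 0.040)
print(f"BDP = {bdp:.0f} bytes")  # the TCP window must be at least this large
```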
I will group the remaining layers (session, presentation and application) into one. Learn about different applications such as HTTP, FTP and others. You will need a good understanding of which ports are used and how the communication is performed. Why does FTP use one port for the initial setup and another for transfer? What is passive FTP? The more you understand about applications, the better you will be able to help system administrators when they have issues, and they will… Learn how to use Wireshark: why is my TCP performing so badly? What are these duplicate ACKs?
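As a taste of the application-layer detail worth knowing: in passive FTP, the server's 227 reply encodes the IP and port for the data connection, with the port split into two bytes (port = p1 * 256 + p2, per RFC 959). A small decoder, using a made-up reply string:

```python
import re

def parse_pasv(reply):
    """Decode a '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' FTP reply.
    The data-connection port is p1 * 256 + p2."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError("not a valid 227 PASV reply")
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

ip, port = parse_pasv("227 Entering Passive Mode (192,168,1,2,19,137)")
print(ip, port)
```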


Thursday, April 25, 2019

Azure Active Directory Hybrid Identity Design Considerations


Consumer-based devices are proliferating in the corporate world, and cloud-based software-as-a-service (SaaS) applications are easy to adopt. As a result, maintaining control of users’ application access across internal datacenters and cloud platforms is challenging.
Microsoft’s identity solutions span on-premises and cloud-based capabilities, creating a single user identity for authentication and authorization to all resources, regardless of location. This concept is known as Hybrid Identity. There are different design and configuration options for hybrid identity using Microsoft solutions, and in some cases it might be difficult to determine which combination will best meet the needs of your organization.
This Hybrid Identity Design Considerations Guide will help you to understand how to design a hybrid identity solution that best fits the business and technology needs for your organization. This guide details a series of steps and tasks that you can follow to help you design a hybrid identity solution that meets your organization’s unique requirements. Throughout the steps and tasks, the guide will present the relevant technologies and feature options available to organizations to meet functional and service quality (such as availability, scalability, performance, manageability, and security) level requirements.
Specifically, the hybrid identity design considerations guide goals are to answer the following questions:
·         What questions do I need to ask and answer to drive a hybrid identity-specific design for a technology or problem domain that best meets my requirements?
·         What sequence of activities should I complete to design a hybrid identity solution for the technology or problem domain?
·         What hybrid identity technology and configuration options are available to help me meet my requirements? What are the trade-offs between those options so that I can select the best option for my business?
Who is this guide intended for?
CIO, CITO, Chief Identity Architects, Enterprise Architects, and IT Architects responsible for designing a hybrid identity solution for medium or large organizations.
How can this guide help you?
You can use this guide to understand how to design a hybrid identity solution that is able to integrate a cloud-based identity management system with your current on-premises identity solution.
The following graphic shows an example of a hybrid identity solution that enables IT admins to integrate their current on-premises Windows Server Active Directory solution with Microsoft Azure Active Directory, enabling users to use Single Sign-On (SSO) across applications located in the cloud and on-premises.

The above illustration is an example of a hybrid identity solution that is leveraging cloud services to integrate with on-premises capabilities in order to provide a single experience to the end-user authentication process and to facilitate IT managing those resources. Although this example can be a common scenario, every organization’s hybrid identity design is likely to be different than the example illustrated in Figure 1 due to different requirements.
This guide provides a series of steps and tasks that you can follow to design a hybrid identity solution that meets your organization’s unique requirements. Throughout the following steps and tasks, the guide presents the relevant technologies and feature options available to you to meet functional and service quality level requirements for your organization.
Assumptions: You have some experience with Windows Server, Active Directory Domain Services, and Azure Active Directory. In this document, it is assumed you are looking for how these solutions can meet your business needs on their own, or in an integrated solution.


Google: Immediately Remove These 6 Apps From Your Smartphone


According to the latest reports, the tech giant Google has removed six well-known apps from the Google Play Store simply for committing ad fraud. Basically, all these applications simply run in the background and generate fake clicks on advertisements.


The tech giant Google has removed six well-known apps from yet another Chinese developer from the Play Store for committing ad fraud. This time the company involved is the DU Group, spun off from the giant Baidu, which still holds a 34% stake in the company.
The most interesting thing is that the most popular Selfie Camera on the list had 50 million downloads in the Android app store, of course, Google Play Store. The DU Group is not the first to get involved in this type of scheme, as in November 2018 Cheetah Mobile’s Android apps, like Clean Master and Battery Doctor, were charged with fraud.
Basically, all these applications simply run in the background and generate fake clicks on advertisements, hence, causing advertiser harm and fraudulent revenue to the developer.
According to the well-known media portal, of course, I am talking about BuzzFeed, at least six DU Group applications make fraudulent clicks on ads, and here they are Selfie Camera, Total Cleaner, Smart Cooler, RAM Master, AIO Flashlight and Omni Cleaner.
Among these six apps, five have more than 10 million downloads each. The apps, which have been downloaded more than 90 million times in total, are now no longer available in the Google Play Store.
The application code itself documented the fraud: one of the routines “verified that the user had not yet clicked on an ad and then clicked at random intervals.” Because the clicks happened even when the application was not open, the device’s battery and data could be drained more quickly.
Apart from committing fraud, the applications also asked for invasive permissions. The well-known app AIO Flashlight, whose only function is to act as a flashlight, requests 31 permissions, seven of which are considered dangerous because they provide access to sensitive data.
Compare this app with other flashlight apps in the Play Store and you will find that they need only two permissions to work. Another hallmark of the DU Group applications is that their privacy policies were hosted on random Tumblr pages, like “dreamilyswimmingwizard.tumblr.com” or “superiorzzr.tumblr.com”.
However, in a note, the tech giant Google stated that it has removed the fraudulent apps from the Play Store and blacklisted them. As a result, they will no longer be able to generate revenue from ads on the Google AdMob platform. So, what do you think about this? Share all your views and thoughts in the comment section below. And if you liked this post, do not forget to share it with your friends and family.

Telecom Fraud Scams on the Rise


From the EU to Texas, law enforcement and security professionals are warning that the telecom threat landscape is evolving as fraudsters leverage telecom infrastructure to conduct network-based fraud attacks, according to multiple sources.
Infosecurity reported that, according to the Cyber-Telecom Crime Report 2019 published by Europol and Trend Micro, telecoms fraud costs the industry and end customers over €29bn ($33bn) each year. The report found that the evolution in telecommunications from switchboard operators to circuit-switched and packet-switched networks has broadened the telecom threat landscape. As a result, criminals are supplanting traditional financial crimes with telecom fraud.
While the report found that telecom fraud is increasingly originating from developing nations or failed economies, multiple media outlets across the US have warned of scam calls that are making their way around the country.
In both Ector County, Texas, and Middlesex County, Massachusetts, the sheriffs’ offices warned residents about a call scam that claims to be originating from the sheriff’s office. An audio clip tells the recipient that they failed to report to jury duty and must resolve this matter with urgency.
“Nationwide, these scammers are attempting to use the criminal justice system and the threat of arrest as a tool to frighten people into paying large sums of money,” Middlesex Sheriff Peter J. Koutoujian told 7 News. “We want residents to be aware of these scams and these tactics in order to better protect themselves.”
Likewise, the state of Washington has also seen a rise in these phone scams, and reporter David Rasbach of the Bellingham Herald warned: “Scammers often try to disguise their identities by spoofing the information that appears in your call identification display and trick you into answering. They use local area codes, numbers that may look familiar or even impersonate a legitimate business, utility or government agency.”

Wednesday, April 24, 2019

System Components



A modern PC is both simple and complicated. It is simple in the sense that over the years, many of the components used to construct a system have become integrated with other components into fewer and fewer actual parts. It is complicated in the sense that each part in a modern system performs many more functions than did the same types of parts in older systems.
This section briefly examines all the components and peripherals in a modern PC system. Each item is discussed further in later chapters.
Here are the components and peripherals necessary to assemble a basic modern PC system:
  • Motherboard
  • Processor
  • Memory (RAM)
  • Case/chassis
  • Power supply
  • Floppy drive
  • Hard disk
  • CD-ROM, CD-RW, or DVD-ROM drive
  • Keyboard
  • Mouse
  • Video card
  • Monitor (display)
  • Sound card
  • Speakers
  • Modem
Basic PC Components
Motherboard: The motherboard is the core of the system. It really is the PC; everything else is connected to it, and it controls everything in the system.
Processor: The processor is often thought of as the "engine" of the computer. It's also called the CPU (central processing unit). Microprocessors are covered in detail in Chapter 3, "Microprocessor Types and Specifications."
Memory (RAM): The system memory is often called RAM (for random access memory). This is the primary memory, which holds all the programs and data the processor is using at a given time. Memory is covered in detail in Chapter 6, "Memory."
Case/chassis: The case is the frame or chassis that houses the motherboard, power supply, disk drives, adapter cards, and any other physical components in the system. The case is covered in detail in Chapter 21, "Power Supply and Chassis/Case."
Power supply: The power supply feeds electrical power to every single part in the PC. The power supply is covered in detail in Chapter 21.
Floppy drive: The floppy drive is a simple, inexpensive, low-capacity, removable-media, magnetic storage device.
Hard drive: The hard disk is the primary archival storage memory for the system. Hard disk drives are covered in detail in Chapter 10, "Hard Disk Storage."
CD-ROM/DVD-ROM: CD-ROM (compact disc read-only) and DVD-ROM (digital versatile disc read-only) drives are relatively high-capacity, removable-media, optical drives. These drives are covered in detail in Chapter 13, "Optical Storage."
Keyboard: The keyboard is the primary device a human uses to communicate with and control a PC. Keyboards are covered in detail in Chapter 18, "Input Devices."
Mouse: Although many types of pointing devices are on the market today, the first and most popular device for this purpose is the mouse. The mouse and other pointing devices are covered in detail in Chapter 18.
Video card: The video card controls the information you see on the monitor. Video cards are covered in detail in Chapter 15, "Video Hardware."
Monitor: Monitors are covered in detail in Chapter 15.
Sound card: The sound card enables the PC to generate complex sounds. Sound cards and speakers are covered in detail in Chapter 16, "Audio Hardware."
Modem: Most prebuilt PCs ship with a modem (generally an internal modem). Modems and other Internet-connectivity devices and methods are covered in Chapter 19, "Internet Connectivity."


Which Python course is best for beginners?

Level Up Your Python Prowess: Newbie Ninjas: Don't fret, little grasshoppers! Courses like "Learn Python 3" on Codecade...