# 2K Monitor Best of 2021

## 2K Monitor Best of 2021 – Quick List

1. Philips 322E1C – 2K Monitor Best of 2021 (Best Overall)
2. LG 27GL83A-B – 2K Monitor Best of 2021 (Best Gaming)
3. Dell P2419H – 2K Monitor Best of 2021 (Best 24 inch)
4. Samsung C27F398 – 2K Monitor Best of 2021 (Best Curved)

## 2K Monitor Best of 2021

Welcome to the wtg buying guide for the best 2K monitors of 2021. In this guide we will efficiently go over the best 2K monitors, including: the best 2K monitor overall, the best for gaming, the best for work, and the best 2k curved monitor. We provide the valuable info to save you time, and to further inform you when purchasing the best 2K monitor for you.

## 2K Monitor

2K is a general term referring to display devices (such as monitors) that have a horizontal resolution of approximately 2,000 pixels. The standard 2K resolution is 2048 x 1080. 2K is sometimes used loosely to include similar resolutions such as 2560 x 1440 (QHD or 1440p) and 1920 x 1080 (FHD or 1080p). 2K-class resolutions are currently the most popular (and most prevalent) among computer users.
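The resolution comparison above comes down to simple arithmetic. A quick sketch, using the width x height pairs quoted in the text:

```python
# Compare the common "2K-class" resolutions mentioned above by
# total pixel count and aspect ratio.
resolutions = {
    "DCI 2K (2048 x 1080)": (2048, 1080),
    "QHD / 1440p (2560 x 1440)": (2560, 1440),
    "FHD / 1080p (1920 x 1080)": (1920, 1080),
}

for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} pixels, aspect ratio {w / h:.2f}:1")
```

Note that QHD pushes roughly 78% more pixels than FHD, which is why "2K" marketing that lumps them together can be misleading.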

Let’s go over the (in-depth) list of the best 2K monitors.

## Philips 322E1C Monitor

Almost every monitor that Philips currently makes is great (seriously). The Philips 322E1C takes numerous positive attributes to the next level with a considerably large 32 inch 2K curved screen. As we’ve mentioned in previous articles, we believe that the standard size for monitors will continue to increase, similar to the expansion of the average TV screen.

It seems that essentially everyone now owns (or wants to own) a large screen TV. The move toward larger monitors is also encouraged by the ability to watch your favorite streaming content on your PC. Of course there are additional benefits for work related tasks and gaming as well.

Philips is also well known for catering (in a very good way) to designers and photographers, with this particular monitor having an sRGB coverage of 122.6%. The color vibrancy and contrast are apparent even to the untrained eye.

We’ve seen Philips monitors go in and out of stock very frequently because of high demand, so you should get your hands on one while they’re still available.

## LG 27GL83A-B 144Hz Monitor

If you’re a gamer, how does a 2K monitor with a 144Hz refresh rate, 1ms response time, and G-Sync sound? It should be music to your ears, especially when it comes from a reputable brand such as LG. This is a monitor geared for gamers, and it has the tech to back it up.

You will be light years ahead of most of your competitors with the integrated technologies. The 1ms response time and 144Hz refresh rate provide an essentially instantaneous response to your environment, which is incredibly useful for first person shooters and other high speed action games.

The high color coverage of sRGB 99% ensures that you experience the game the way it was meant to be seen, with accurate, vibrant colors. The integration of NVIDIA G-Sync reduces or eliminates screen tearing and stuttering to ensure that you don’t miss a beat in the heat of battle. The incredible contrast gives you an edge in areas with low light, where some enemies may not be able to see you, but you can see them.

If you are serious about gaming, you need a serious gaming monitor, and this is one of the best.

## Dell P2419H Monitor

It’s quite common to see an office complex filled with computer products from Dell. There are very good reasons for that. Dell is a very solid player in the computer industry, and has continued to provide high end products for decades. If you like to get things done, or perhaps you’re working from home, why not bring the office to you?

The Dell P2419H is a compact display with minimal bezels and a minimalist style that would look good on just about any desk. Your eyes will thank you for the included eye care technologies. These include anti-glare coating and Dell’s proprietary ComfortView, which reduces blue light and flickering.

The incredible affordability of this monitor makes it a perfect candidate for a dual monitor setup. You can pay the same amount for two of these monitors as you would for one from another brand or model. Our pick for the best 2k 24 inch monitor can then become an incredible dual-screen setup with 48 inches of combined display.

## Samsung C27F398 Monitor

Samsung is no stranger to high quality displays (TVs, phones, monitors, etc.); they’re currently the world’s leader in the category. If you want a good sized curved monitor from a brand you can easily trust, this is definitely one of the best.

The Samsung C27F398 has an 1800R curvature which pulls you in to whatever you’re viewing. And that’s the point of a curved screen: you get increased immersion and a wider field of view, and all of the benefits associated with them.

For many computer users, once they purchase a curved monitor, all their future displays are curved as well. If you’re a business person, artist, streaming content enthusiast, gamer, or even all of the above, you will love this Samsung monitor.

## Related Articles

best 2k monitor
best 2k monitors
2k monitors
2k computer monitor
best 2k gaming monitor
cheap 2k monitor
good 2k monitors
2k pc monitor
best budget 2k monitor
cheap 2k monitors
monitor 2k
best 2k ips monitor
top 2k monitors
best 2k monitor for gaming
best 2k monitor for gaming 2016
best 2k 144hz monitor
cheapest 2k monitor
computer monitor 2k
32 inch monitor 2k
what is a 2k monitor
best 144hz 2k monitor
2k 24 inch monitor
best 27 inch 2k monitor
2k monitor reviews
32 inch 2k monitor
24 inch 2k monitor
best 2k gaming monitors
2k 32 inch monitor
2k monitor 144hz
27 inch 2k monitor
2k 27 inch monitor
best 2k gaming monitor 2019
2k resolution monitor
27 2k monitor
32 2k monitor
2k gaming monitors
27 inch monitor 2k
2k monitor sale
2k monitor resolution
2k ultra wide monitor
ultrawide 2k monitor
2k display
144hz 2k monitor
2k monitor 144hz 1ms
monitor 2k 144hz
2k monitor 144hz g sync
smallest 1440p monitor
best 24 inch ips gaming monitor
best 2k monitor 2018
2k monitor

## History of Electricity

*Thales, the earliest known researcher into electricity.*


Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the “Thunderer of the Nile“, and described them as the “protectors” of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians.[2] Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects.[3] Patients suffering from ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them.[4]


Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat’s fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing.[5][6][7][8] Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature.[9]


*Benjamin Franklin conducted extensive research on electricity in the 18th century, as documented by Joseph Priestley (1767), History and Present Status of Electricity, with whom Franklin carried on extended correspondence.*


Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber.[5] He coined the New Latin word electricus (“of amber” or “like amber”, from ἤλεκτρον, elektron, the Greek word for “amber”) to refer to the property of attracting small objects after being rubbed.[10] This association gave rise to the English words “electric” and “electricity”, which made their first appearance in print in Thomas Browne‘s Pseudodoxia Epidemica of 1646.[11]


Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay.[12] Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky.[13] A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature.[14] He also explained the apparently paradoxical behavior[15] of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges.[12]


*Michael Faraday’s discoveries formed the foundation of electric motor technology.*


In 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles.[16][17][12] Alessandro Volta‘s battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used.[16][17] The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827.[17] Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his “On Physical Lines of Force” in 1861 and 1862.[18]


While the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.


In 1887, Heinrich Hertz[19]:843–44[20] discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for “his discovery of the law of the photoelectric effect”.[21] The photoelectric effect is also employed in photocells, such as those found in solar panels, which are frequently used to generate electricity commercially.


The first solid-state device was the “cat’s-whisker detector” first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect.[22] In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.[23][24]


Solid-state electronics came into its own with the emergence of transistor technology. The first working transistor, a germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947,[25] followed by the bipolar junction transistor in 1948.[26] These early transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis.[27]:168 They were followed by the silicon-based MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor), invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959.[28][29][30] It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses,[27]:165,179 leading to the silicon revolution.[31] Solid-state devices started becoming prevalent from the 1960s, with the transition from vacuum tubes to semiconductor diodes, transistors, integrated circuit (IC) chips, MOSFETs, and light-emitting diode (LED) technology.


The most common electronic device is the MOSFET,[29][32] which has become the most widely manufactured device in history.[33] Common solid-state MOS devices include microprocessor chips[34] and semiconductor memory.[35][36] A special type of semiconductor memory is flash memory, which is used in USB flash drives and mobile devices, as well as solid-state drive (SSD) technology to replace mechanically rotating magnetic disc hard disk drive (HDD) technology.

## Electric charge

*Charge on a gold-leaf electroscope causes the leaves to visibly repel each other.*


The presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity.[19]:457 A lightweight ball suspended from a string can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract.[19]


The force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb’s law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them.[37][38]:35 The electromagnetic force is very strong, second only in strength to the strong interaction,[39] but unlike that force it operates over all distances.[40] In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10⁴² times that of the gravitational attraction pulling them together.[41]
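The electron-electron comparison above can be checked numerically. A minimal sketch using standard physical constants; note that the separation distance cancels out of the ratio, since both forces follow an inverse-square law:

```python
# Coulomb's law F = k*q1*q2/r^2 versus Newtonian gravity F = G*m1*m2/r^2,
# applied to a pair of electrons.
K = 8.9875e9          # Coulomb constant, N·m²/C²
G = 6.674e-11         # gravitational constant, N·m²/kg²
E_CHARGE = 1.6022e-19 # elementary charge, C
E_MASS = 9.109e-31    # electron mass, kg

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return K * abs(q1 * q2) / r**2

def gravity_force(m1, m2, r):
    """Magnitude of the gravitational attraction between two masses."""
    return G * m1 * m2 / r**2

r = 1e-10  # any distance works; the ratio is distance-independent
ratio = coulomb_force(E_CHARGE, E_CHARGE, r) / gravity_force(E_MASS, E_MASS, r)
print(f"electric/gravitational force ratio: {ratio:.2e}")  # on the order of 10**42
```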


Study has shown that the origin of charge is from certain types of subatomic particles which have the property of electric charge. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. The most familiar carriers of electrical charge are the electron and proton. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system.[42] Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire.[38]:2–5 The informal term static electricity refers to the net presence (or ‘imbalance’) of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other.


The charge on electrons and protons is opposite in sign, hence an amount of charge may be expressed as being either negative or positive. By convention, the charge carried by electrons is deemed negative, and that by protons positive, a custom that originated with the work of Benjamin Franklin.[43] The amount of charge is usually given the symbol Q and expressed in coulombs;[44] each electron carries the same charge of approximately −1.6022×10⁻¹⁹ coulomb. The proton has a charge that is equal and opposite, and thus +1.6022×10⁻¹⁹ coulomb. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle.[45]


Charge can be measured by a number of means, an early instrument being the gold-leaf electroscope, which although still in use for classroom demonstrations, has been superseded by the electronic electrometer.[38]:2–5

### Electric current

The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some materials, called electrical conductors, but will not flow through an electrical insulator.[46]
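One ampere is one coulomb of charge passing per second. A quick sketch of what that means in terms of individual electrons, using the elementary charge given later in this section:

```python
# How many electrons per second make up a one-ampere current?
E_CHARGE = 1.6022e-19  # magnitude of the electron's charge, in coulombs

current_amperes = 1.0
electrons_per_second = current_amperes / E_CHARGE
print(f"{electrons_per_second:.2e} electrons/s")  # roughly 6.24e18 per second
```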


By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons.[47] However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.


*An electric arc provides an energetic demonstration of electric current.*


The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second,[38]:17 the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.[48]
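The slow-drift point above can be made concrete with the standard relation v = I / (n · A · e). A rough sketch; the free-electron density of copper is an assumed textbook value, not a figure from the text:

```python
# Drift velocity of conduction electrons: v = I / (n * A * e).
E_CHARGE = 1.6022e-19  # elementary charge, C
N_COPPER = 8.5e28      # free-electron density of copper, 1/m³ (assumed textbook value)

current = 1.0          # amperes
area = 1e-6            # wire cross-section: 1 mm², expressed in m²

drift_velocity = current / (N_COPPER * area * E_CHARGE)
print(f"drift velocity: {drift_velocity * 1000:.3f} mm/s")  # well under a millimetre per second
```

Despite this crawl, the signal itself travels near light speed, because it is the field, not the individual electrons, that propagates.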


Current causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840.[38]:23–24 One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass.[49] He had discovered electromagnetism, a fundamental interaction between electricity and magnetism. The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment.[50]


In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced for example by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative.[51]:11 If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave.[51]:206–07 Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance.[51]:223–25 These properties however can become important when circuitry is subjected to transients, such as when first energised.
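The claim that a sine-wave current averages to zero yet still delivers energy can be verified numerically. A small sketch sampling one full cycle and computing both the time average and the root-mean-square (RMS) value:

```python
# A sine-wave AC current averages to zero over a full cycle,
# but its RMS value — which governs delivered power — does not.
import math

N = 100_000   # samples over one full cycle
peak = 1.0    # peak current in amperes (illustrative)
samples = [peak * math.sin(2 * math.pi * n / N) for n in range(N)]

mean = sum(samples) / N
rms = math.sqrt(sum(s * s for s in samples) / N)

print(f"time-averaged current: {mean:.6f} A")  # effectively zero
print(f"RMS current: {rms:.4f} A")             # about peak / sqrt(2)
```

The RMS figure is why a "230 V" mains supply is described by an effective value rather than its (higher) peak.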

### Electric field

The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance.[40] However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.[41]


*Field lines emanating from a positive charge above a plane conductor.*

An electric field generally varies in space,[52] and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point.[19]:469–70 The conceptual charge, termed a ‘test charge‘, must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, so it follows that an electric field is a vector field.[19]:469–70


The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday,[53] whose term ‘lines of force‘ still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines.[53] Field lines emanating from stationary charges have several key properties: first, that they originate at positive charges and terminate at negative charges; second, that they must enter any good conductor at right angles, and third, that they may never cross nor close in on themselves.[19]:479


A hollow conducting body carries all its charge on its outer surface. The field is therefore zero at all places inside the body.[38]:88 This is the operating principle of the Faraday cage, a conducting metal shell which isolates its interior from outside electrical effects.


The principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre.[54] The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh.[55]
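The breakdown figures quoted above lend themselves to a quick arithmetic sketch: the voltage a gap can withstand is roughly the breakdown field strength times the gap distance. The gap sizes below are assumed for illustration.

```python
# Rough estimate of flashover voltage across a small air gap, using
# the ~30 kV/cm small-gap figure from the text (the breakdown field
# is weaker over larger gaps, so this is only indicative).

BREAKDOWN_KV_PER_CM = 30  # small-gap value for air

def max_voltage_kv(gap_cm, field_kv_per_cm=BREAKDOWN_KV_PER_CM):
    """Approximate voltage (kV) at which an air gap of gap_cm arcs over."""
    return field_kv_per_cm * gap_cm
```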


The field strength is greatly affected by nearby conducting objects, and it is particularly intense when it is forced to curve around sharply pointed objects. This principle is exploited in the lightning conductor, the sharp spike of which acts to encourage the lightning stroke to develop there, rather than to the building it serves to protect.[56]:155

### Electric potential

A pair of AA cells. The + sign indicates the polarity of the potential difference between the battery terminals.

The concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity.[19]:494–98 This definition of potential, while formal, has little practical application; a more useful concept is that of electric potential difference: the energy required to move a unit charge between two specified points. An electric field has the special property that it is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated.[19]:494–98 The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage.

For practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged—and unchargeable.[57]

Electric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will ‘fall’ across the voltage caused by an electric field.[58] As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor‘s surface, otherwise this would produce a force that will move the charge carriers to even the potential of the surface.

The electric field was formally defined as the force exerted per unit charge, but the concept of potential allows for a more useful and equivalent definition: the electric field is the local gradient of the electric potential. Usually expressed in volts per metre, the vector direction of the field is the line of greatest slope of potential, and where the equipotentials lie closest together.[38]:60
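The equivalence just stated (field as the local gradient of potential) can be checked numerically. The sketch below is an assumed example, not from the article: it takes the potential V(x) = kq/x of a point charge and verifies that the finite-difference slope −dV/dx matches the analytic field kq/x².

```python
# E = -dV/dx: the field is the negative local gradient of the
# potential, estimated here with a central finite difference.

K = 8.9875e9  # Coulomb's constant, N*m^2/C^2

def potential(x, q=1e-6):
    """Potential V = k*q/x of a point charge (volts)."""
    return K * q / x

def field_from_gradient(x, q=1e-6, h=1e-6):
    """Numerical -dV/dx via central difference (volts per metre)."""
    return -(potential(x + h, q) - potential(x - h, q)) / (2 * h)

def field_analytic(x, q=1e-6):
    """Analytic field k*q/x^2 for comparison."""
    return K * q / x**2
```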

### Electromagnets

Magnetic field circles around a current

Ørsted’s discovery in 1820 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it.[49] Ørsted’s words were that “the electric conflict acts in a revolving manner.” The force also depended on the direction of the current, for if the flow was reversed, then the force did too.[59]

Ørsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires containing currents in opposite directions are forced apart.[60] The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere.[60]

The electric motor exploits an important effect of electromagnetism: a current through a magnetic field experiences a force at right angles to both the field and current

This relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday’s invention of the electric motor in 1821. Faraday’s homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained.[61]

Experimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday’s law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy.[61] Faraday’s disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work.
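Faraday's law as stated above (induced potential difference proportional to the rate of change of flux) can be sketched as a simple finite-difference calculation. The coil values below are assumed for illustration.

```python
# Faraday's law of induction: EMF = -N * dPhi/dt, approximated here
# over a finite time interval for an N-turn coil.

def induced_emf(flux_change_webers, time_seconds, turns=1):
    """Induced EMF (volts) for a given change in magnetic flux."""
    return -turns * flux_change_webers / time_seconds

# Flux through a 100-turn coil rising by 0.02 Wb over 0.1 s
# induces an EMF of 20 V (negative sign per Lenz's law).
```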

### Electrochemistry

Italian physicist Alessandro Volta showing his battery to French emperor Napoleon Bonaparte in the early 19th century.


The ability of chemical reactions to produce electricity, and conversely the ability of electricity to drive chemical reactions has a wide array of uses.

Electrochemistry has always been an important part of electricity. From the initial invention of the Voltaic pile, electrochemical cells have evolved into the many different types of batteries, electroplating and electrolysis cells. Aluminium is produced in vast quantities this way, and many portable devices are electrically powered using rechargeable cells.

### Electric circuits

A basic electric circuit. The voltage source V on the left drives a current I around the circuit, delivering electrical energy into the resistor R. From the resistor, the current returns to the source, completing the circuit.

An electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task.


The components in an electric circuit can take many forms, which can include elements such as resistors, capacitors, switches, transformers and electronics. Electronic circuits contain active components, usually semiconductors, and typically exhibit non-linear behaviour, requiring complex analysis. The simplest electric components are those that are termed passive and linear: while they may temporarily store energy, they contain no sources of it, and exhibit linear responses to stimuli.[62]:15–16


The resistor is perhaps the simplest of passive circuit elements: as its name suggests, it resists the current through it, dissipating its energy as heat. The resistance is a consequence of the motion of charge through a conductor: in metals, for example, resistance is primarily due to collisions between electrons and ions. Ohm’s law is a basic law of circuit theory, stating that the current passing through a resistance is directly proportional to the potential difference across it. The resistance of most materials is relatively constant over a range of temperatures and currents; materials under these conditions are known as ‘ohmic’. The ohm, the unit of resistance, was named in honour of Georg Ohm, and is symbolised by the Greek letter Ω. 1 Ω is the resistance that will produce a potential difference of one volt in response to a current of one amp.[62]:30–35
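Ohm's law and the resistor's heat dissipation, as described above, amount to two one-line formulas. A minimal sketch:

```python
# Ohm's law: I = V / R. A potential difference of one volt across
# one ohm drives a current of one ampere. The energy dissipated
# as heat per second is P = V^2 / R.

def current_amps(voltage_volts, resistance_ohms):
    """Current through an ohmic resistance."""
    return voltage_volts / resistance_ohms

def power_dissipated_watts(voltage_volts, resistance_ohms):
    """Heat dissipated per second in the resistor."""
    return voltage_volts**2 / resistance_ohms
```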


The capacitor is a development of the Leyden jar and is a device that can store charge, and thereby store electrical energy in the resulting field. It consists of two conducting plates separated by a thin insulating dielectric layer; in practice, thin metal foils are coiled together, increasing the surface area per unit volume and therefore the capacitance. The unit of capacitance is the farad, named after Michael Faraday, and given the symbol F: one farad is the capacitance that develops a potential difference of one volt when it stores a charge of one coulomb. A capacitor connected to a voltage supply initially causes a current as it accumulates charge; this current will however decay in time as the capacitor fills, eventually falling to zero. A capacitor will therefore not permit a steady state current, but instead blocks it.[62]:216–20
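The charging behaviour described above follows an exponential decay. The sketch below assumes a simple series RC circuit with illustrative component values (5 V supply, 1 kΩ resistor, 1 µF capacitor); none of these figures come from the article.

```python
# Charging current in a series RC circuit: I(t) = (V/R) * exp(-t/RC).
# The current starts at V/R and decays toward zero as the capacitor
# fills -- which is why a capacitor blocks steady-state (DC) current.

import math

def charging_current(t, v=5.0, r=1000.0, c=1e-6):
    """Current (amps) at time t (seconds) after the supply is connected."""
    return (v / r) * math.exp(-t / (r * c))
```

After about five time constants (5RC) the current has fallen below one percent of its initial value, which is the usual rule of thumb for "fully charged".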


The inductor is a conductor, usually a coil of wire, that stores energy in a magnetic field in response to the current through it. When the current changes, the magnetic field does too, inducing a voltage between the ends of the conductor. The induced voltage is proportional to the time rate of change of the current. The constant of proportionality is termed the inductance. The unit of inductance is the henry, named after Joseph Henry, a contemporary of Faraday. One henry is the inductance that will induce a potential difference of one volt if the current through it changes at a rate of one ampere per second. The inductor’s behaviour is in some regards converse to that of the capacitor: it will freely allow an unchanging current, but opposes a rapidly changing one.[62]:226–29
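The inductor relation above is likewise a one-liner: the induced voltage is the inductance times the rate of change of current. A minimal sketch:

```python
# V = L * dI/dt for an ideal inductor: a current changing at one
# ampere per second through one henry induces one volt, matching
# the definition of the henry given in the text.

def induced_voltage(inductance_henries, di_dt_amps_per_s):
    """Voltage (volts) induced across an ideal inductor."""
    return inductance_henries * di_dt_amps_per_s
```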

### Electric power

Electric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second.

Electric power, like mechanical power, is the rate of doing work, measured in watts, and represented by the letter P. The term wattage is used colloquially to mean “electric power in watts.” The electric power in watts produced by an electric current I consisting of a charge of Q coulombs every t seconds passing through an electric potential (voltage) difference of V is

P = work done per unit time = QV/t = IV

where

Q is electric charge in coulombs
t is time in seconds
I is electric current in amperes
V is electric potential or voltage in volts

Electricity generation is often done with electric generators, but can also be supplied by chemical sources such as electric batteries or by other means from a wide variety of sources of energy. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.[63]
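The relations above can be put together in a short sketch: P = IV gives the power in watts, and a utility bill is the energy in kilowatt hours times the tariff. The current, voltage, and price figures below are assumptions for illustration.

```python
# Electric power and energy billing: P = IV (watts), and utilities
# sell energy by the kilowatt hour (3.6 MJ) -- power in kilowatts
# multiplied by running time in hours.

def power_watts(current_amps, voltage_volts):
    """P = IV."""
    return current_amps * voltage_volts

def energy_kwh(power_w, hours):
    """Energy delivered, in kilowatt hours."""
    return power_w / 1000 * hours

def cost(power_w, hours, price_per_kwh=0.15):  # tariff is an assumption
    """Billed cost for running a load of power_w for the given hours."""
    return energy_kwh(power_w, hours) * price_per_kwh
```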

### Electronics

Surface mount electronic components

Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, optoelectronics, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes amplification of weak signals possible, and electronics is widely used in information processing, telecommunications, and signal processing. The ability of electronic devices to act as switches makes digital information processing possible. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system.

Today, most electronic devices use semiconductor components to perform electron control. The study of semiconductor devices and related technology is considered a branch of solid state physics, whereas the design and construction of electronic circuits to solve practical problems come under electronics engineering.

### Electromagnetic wave

Faraday’s and Ampère’s work showed that a time-varying magnetic field acted as a source of an electric field, and a time-varying electric field was a source of a magnetic field. Thus, when either field is changing in time, then a field of the other is necessarily induced.[19]:696–700 Such a phenomenon has the properties of a wave, and is naturally referred to as an electromagnetic wave. Electromagnetic waves were analysed theoretically by James Clerk Maxwell in 1864. Maxwell developed a set of equations that could unambiguously describe the interrelationship between electric field, magnetic field, electric charge, and electric current. He could moreover prove that such a wave would necessarily travel at the speed of light, and thus light itself was a form of electromagnetic radiation. Maxwell’s Laws, which unify light, fields, and charge are one of the great milestones of theoretical physics.[19]:696–700

Thus, the work of many researchers enabled the use of electronics to convert signals into high frequency oscillating currents, and via suitably shaped conductors, electricity permits the transmission and reception of these signals via radio waves over very long distances.


### Generation and transmission

Early 20th-century alternator made in Budapest, Hungary, in the power generating hall of a hydroelectric station (photograph by Prokudin-Gorsky, 1905–1915).


In the 6th century BC, the Greek philosopher Thales of Miletus experimented with amber rods and these experiments were the first studies into the production of electrical energy. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient.[64] It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electrical energy.[64] The battery is a versatile and very common power source which is ideally suited to many applications, but its energy storage is finite, and once discharged it must be disposed of or recharged. For large electrical demands electrical energy must be generated and transmitted continuously over conductive transmission lines.


Electrical power is usually generated by electro-mechanical generators driven by steam produced from fossil fuel combustion, or the heat released from nuclear reactions; or from other sources such as kinetic energy extracted from wind or flowing water. The modern steam turbine invented by Sir Charles Parsons in 1884 today generates about 80 percent of the electric power in the world using a variety of heat sources. Such generators bear no resemblance to Faraday’s homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends.[65] The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed.[66][67]
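The efficiency gain from high-voltage transmission mentioned above can be made concrete. For a fixed power delivered, raising the voltage lowers the current, and resistive line loss falls as I²R. The power, voltage, and line-resistance figures below are assumed for illustration.

```python
# Why transformers matter for transmission: delivering the same power
# at a higher voltage means a lower current, and the I^2 * R loss in
# the line falls with the square of that current.

def line_loss_watts(power_w, voltage_v, line_resistance_ohms):
    """Resistive loss along a transmission line carrying power_w."""
    i = power_w / voltage_v       # current drawn for the delivered power
    return i**2 * line_resistance_ohms

# Delivering 1 MW over a 10-ohm line: at 10 kV the loss is 100 kW,
# but at 100 kV it is only 1 kW -- a hundredfold reduction for a
# tenfold increase in voltage.
```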


Wind power is of increasing importance in many countries.


Since electrical energy cannot easily be stored in quantities large enough to meet demands on a national scale, at all times exactly as much must be produced as is required.[66] This requires electricity utilities to make careful predictions of their electrical loads, and maintain constant co-ordination with their power stations. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses.


Demand for electricity grows with great rapidity as a nation modernises and its economy develops. The United States showed a 12% increase in demand during each year of the first three decades of the twentieth century,[68] a rate of growth that is now being experienced by emerging economies such as those of India or China.[69][70] Historically, the growth rate for electricity demand has outstripped that for other forms of energy.[71]:16
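The compounding effect of the growth figure above is easy to understate: 12% annual growth sustained over three decades multiplies demand roughly thirty-fold. A quick check:

```python
# Compound growth: demand after n years of growth at rate g is
# (1 + g)^n times the starting demand. At 12% per year for 30 years
# (the US figure cited in the text), that is roughly a 30x increase.

def demand_multiple(annual_growth=0.12, years=30):
    """Factor by which demand grows after compounding."""
    return (1 + annual_growth) ** years
```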


Environmental concerns with electricity generation have led to an increased focus on generation from renewable sources, in particular from wind and solar. While debate can be expected to continue over the environmental impact of different means of electricity production, its final form is relatively clean.[71]:89

### Applications


The light bulb, an early application of electricity, operates by Joule heating: the passage of current through resistance generating heat


Electricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses.[72] The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power.


Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories.[73] Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has started to flow in the direction of deregulation in the electrical power sector.[74]


The resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station.[75] A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings.[76] Electricity is however still a highly practical energy source for heating and refrigeration,[77] with air conditioning/heat pumps representing a growing sector for electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate.[78]


Electricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications.


With the construction of first transcontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process.


The effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged to either carry along a power source such as a battery, or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains,[79] and an increasing number of battery-powered electric cars in private ownership.


Electronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century,[80] and a fundamental building block of all modern circuitry. A modern integrated circuit may contain several billion miniaturised transistors in a region only a few centimetres square.[81]


A voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current.[82] The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions.[83]


If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns.[82] The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock is referred to as electrocution. Electrocution is still the means of judicial execution in some jurisdictions, though its use has become rarer in recent times.[84]

### Electrical phenomena in nature

The electric eel, Electrophorus electricus

Electricity is not a human invention, and may be observed in several forms in nature, a prominent manifestation of which is lightning. Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. The Earth’s magnetic field is thought to arise from a natural dynamo of circulating currents in the planet’s core.[85] Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when subjected to external pressure.[86] This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal, and when a piezoelectric material is subjected to an electric field, a small change in physical dimensions takes place.[86]

Bioelectrogenesis in microbial life is a prominent phenomenon in soils and sediment ecology resulting from anaerobic respiration. The microbial fuel cell mimics this ubiquitous natural phenomenon.

Some organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception,[87] while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon.[3] The order Gymnotiformes, of which the best known example is the electric eel, detect or stun their prey via high voltages generated from modified muscle cells called electrocytes.[3][4] All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles.[88] An electric shock stimulates this system, and causes muscles to contract.[89] Action potentials are also responsible for coordinating activities in certain plants.[88]

## Cultural perception

In the 19th and early 20th century, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature.[91] This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. “Revitalization” or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani’s work. These results were known to Mary Shelley when she authored Frankenstein (1818), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films.

As the public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light,[92] such as the workers who “finger death at their gloves’ end as they piece and repiece the living wires” in Rudyard Kipling‘s 1907 poem Sons of Martha.[92] Electrically powered vehicles of every sort featured large in adventure stories such as those of Jules Verne and the Tom Swift books.[92] The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers.[92]

With electricity ceasing to be a novelty and becoming a necessity of everyday life in the later half of the 20th century, it attracts particular attention from popular culture only when it stops flowing,[92] an event that usually signals disaster.[92] The people who keep it flowing, such as the nameless hero of Jimmy Webb’s song “Wichita Lineman” (1968),[92] are still often cast as heroic, wizard-like figures.[92]

## Causes

Materials are made of atoms that are normally electrically neutral because they contain equal numbers of positive charges (protons in their nuclei) and negative charges (electrons in “shells” surrounding the nucleus). The phenomenon of static electricity requires a separation of positive and negative charges. When two materials are in contact, electrons may move from one material to the other, which leaves an excess of positive charge on one material, and an equal negative charge on the other. When the materials are separated they retain this charge imbalance.

Contact-induced charge separation

Styrofoam peanuts clinging to a cat’s fur due to static electricity. The triboelectric effect causes an electrostatic charge to build up on the fur due to the cat’s movement. The electric field of the charge causes polarization of the molecules of the styrofoam due to electrostatic induction, resulting in a slight attraction of the light plastic pieces to the charged fur. This effect is also the cause of static cling in clothes.

Electrons can be exchanged between materials on contact; materials with weakly bound electrons tend to lose them while materials with sparsely filled outer shells tend to gain them. This is known as the triboelectric effect and results in one material becoming positively charged and the other negatively charged. The polarity and strength of the charge on a material once they are separated depends on their relative positions in the triboelectric series. The triboelectric effect is the main cause of static electricity as observed in everyday life, and in common high-school science demonstrations involving rubbing different materials together (e.g., fur against an acrylic rod). Contact-induced charge separation causes your hair to stand up and causes “static cling” (for example, a balloon rubbed against the hair becomes negatively charged; when near a wall, the charged balloon is attracted to positively charged particles in the wall, and can “cling” to it, appearing to be suspended against gravity).

### Pressure-induced charge separation

Applied mechanical stress generates a separation of charge in certain types of crystals and ceramics; this is the piezoelectric effect.

### Heat-induced charge separation

Heating generates a separation of charge in the atoms or molecules of certain materials. All pyroelectric materials are also piezoelectric. The atomic or molecular properties of heat and pressure response are closely related.

### Charge-induced charge separation

A charged object brought close to an electrically neutral object causes a separation of charge within the neutral object. Charges of the same polarity are repelled and charges of the opposite polarity are attracted. As the force due to the interaction of electric charges falls off rapidly with increasing distance, the effect of the closer (opposite polarity) charges is greater and the two objects feel a force of attraction. The effect is most pronounced when the neutral object is an electrical conductor as the charges are more free to move around. Careful grounding of part of an object with a charge-induced charge separation can permanently add or remove electrons, leaving the object with a global, permanent charge. This process is integral to the workings of the Van de Graaff generator, a device commonly used to demonstrate the effects of static electricity.

Removing or preventing a buildup of static charge can be as simple as opening a window or using a humidifier to increase the moisture content of the air, making the atmosphere more conductive. Air ionizers can perform the same task.[2]

Items that are particularly sensitive to static discharge may be treated with the application of an antistatic agent, which adds a conducting surface layer that ensures any excess charge is evenly distributed. Fabric softeners and dryer sheets used in washing machines and clothes dryers are an example of an antistatic agent used to prevent and remove static cling.[3]

Many semiconductor devices used in electronics are particularly sensitive to static discharge. Conductive antistatic bags are commonly used to protect such components. People who work on circuits that contain these devices often ground themselves with a conductive antistatic strap.[4][5]

In industrial settings such as paint or flour plants, as well as in hospitals, antistatic safety boots are sometimes used to prevent a buildup of static charge due to contact with the floor. These shoes have soles with good conductivity. Anti-static shoes should not be confused with insulating shoes, which provide exactly the opposite benefit: some protection against serious electric shocks from the mains voltage.[6]

## Static discharge

The spark associated with static electricity is caused by electrostatic discharge, or simply static discharge, as excess charge is neutralized by a flow of charges from or to the surroundings.

The feeling of an electric shock is caused by the stimulation of nerves as the neutralizing current flows through the human body. The energy stored as static electricity on an object varies depending on the size of the object and its capacitance, the voltage to which it is charged, and the dielectric constant of the surrounding medium. For modelling the effect of static discharge on sensitive electronic devices, a human being is represented as a capacitor of 100 picofarads, charged to a voltage of 4,000 to 35,000 volts. When touching an object, this energy is discharged in less than a microsecond.[7] While the total energy is small, on the order of millijoules, it can still damage sensitive electronic devices. Larger objects will store more energy, which may be directly hazardous to human contact or which may give a spark that can ignite flammable gas or dust.
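
The millijoule figure follows directly from the capacitor energy formula E = ½CV². A quick check in Python, using the human-body model values quoted above (100 pF, 4,000 to 35,000 volts):

```python
# Energy stored in the human-body model of static discharge: E = 1/2 * C * V^2
C = 100e-12  # capacitance of a person in farads (100 pF, from the text)

for volts in (4_000, 35_000):
    energy_mj = 0.5 * C * volts**2 * 1e3  # convert joules to millijoules
    print(f"{volts:,} V -> {energy_mj:.2f} mJ")
# 4,000 V -> 0.80 mJ
# 35,000 V -> 61.25 mJ
```

Both results land in the millijoule range: small in absolute terms, but above the damage threshold of many semiconductor devices.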

### Natural static discharge

Lightning is a dramatic natural example of static discharge. While the details are unclear and remain a subject of debate, the initial charge separation is thought to be associated with contact between ice particles within storm clouds. In general, significant charge accumulations can only persist in regions of low electrical conductivity (very few charges free to move in the surroundings), hence the flow of neutralizing charges often results from neutral atoms and molecules in the air being torn apart to form separate positive and negative charges, which travel in opposite directions as an electric current, neutralizing the original accumulation of charge. The static charge in air typically breaks down in this way at around 10,000 volts per centimeter (10 kV/cm) depending on humidity.[8] The discharge superheats the surrounding air causing the bright flash, and produces a shock wave causing the clicking sound. The lightning bolt is simply a scaled-up version of the sparks seen in more domestic occurrences of static discharge. The flash occurs because the air in the discharge channel is heated to such a high temperature that it emits light by incandescence. The clap of thunder is the result of the shock wave created as the superheated air expands explosively.
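
The quoted breakdown field of roughly 10 kV/cm gives a back-of-the-envelope way to estimate the voltage behind a spark of a given length. A minimal sketch (the function name is illustrative; real breakdown varies with humidity and electrode geometry):

```python
# Rough spark-gap estimate from the ~10 kV/cm breakdown field quoted above.
BREAKDOWN_KV_PER_CM = 10  # approximate; actual value depends on humidity

def spark_voltage_kv(gap_cm):
    """Approximate voltage (kV) needed to jump an air gap of gap_cm centimeters."""
    return BREAKDOWN_KV_PER_CM * gap_cm

print(spark_voltage_kv(0.5))  # 5.0 -> ~5 kV for a half-centimeter doorknob spark
print(spark_voltage_kv(100))  # 1000 -> meter-scale gaps need megavolt-class potentials
```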

### Electronic components

Many semiconductor devices used in electronics are very sensitive to the presence of static electricity and can be damaged by a static discharge. The use of an antistatic strap is mandatory for researchers manipulating nanodevices. Further precautions can be taken by removing shoes with thick rubber soles and maintaining permanent contact with a metallic ground.

Static electricity is a major hazard when refueling aircraft.

Discharge of static electricity can create severe hazards in those industries dealing with flammable substances, where a small electrical spark might ignite explosive mixtures.[9]

The flowing movement of finely powdered substances or low conductivity fluids in pipes or through mechanical agitation can build up static electricity.[10] The flow of granules of material like sand down a plastic chute can transfer charge, which can be easily measured using a multimeter connected to metal foil lining the chute at intervals, and can be roughly proportional to particulate flow.[11] Dust clouds of finely powdered substances can become combustible or explosive, and static discharges in dust or vapor clouds have caused explosions. Among the major industrial incidents that have occurred are: a grain silo in southwest France, a paint plant in Thailand, a factory making fiberglass moldings in Canada, a storage tank explosion in Glenpool, Oklahoma in 2003, and a portable tank filling operation and a tank farm in Des Moines, Iowa and Valley Center, Kansas in 2007.[12][13][14]

The ability of a fluid to retain an electrostatic charge depends on its electrical conductivity. When low conductivity fluids flow through pipelines or are mechanically agitated, contact-induced charge separation called flow electrification occurs.[15][16] Fluids that have low electrical conductivity (below 50 picosiemens per meter) are called accumulators. Fluids having conductivity above 50 pS/m are called non-accumulators. In non-accumulators, charges recombine as fast as they are separated and hence electrostatic charge accumulation is not significant. In the petrochemical industry, 50 pS/m is the recommended minimum value of electrical conductivity for adequate removal of charge from a fluid.

Kerosines may have conductivity ranging from less than 1 picosiemens per meter to 20 pS/m. For comparison, deionized water has a conductivity of about 10,000,000 pS/m or 10 µS/m.[17]
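
The 50 pS/m threshold above amounts to a simple classification rule. A sketch (the function name is hypothetical), using the conductivities quoted for kerosine and deionized water:

```python
# Classify a fluid by the 50 pS/m accumulator threshold described above.
THRESHOLD_PS_PER_M = 50

def classify(conductivity_ps_per_m):
    """Return whether a fluid accumulates electrostatic charge."""
    if conductivity_ps_per_m < THRESHOLD_PS_PER_M:
        return "accumulator"
    return "non-accumulator"

print(classify(1))           # accumulator (kerosine at the low end of its range)
print(classify(10_000_000))  # non-accumulator (deionized water, ~10 uS/m)
```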

Transformer oil is part of the electrical insulation system of large power transformers and other electrical apparatus. Re-filling of large apparatus requires precautions against electrostatic charging of the fluid, which may damage sensitive transformer insulation.

An important concept for insulating fluids is the static relaxation time. This is similar to the time constant τ (tau) within an RC circuit. For insulating materials, it is the ratio of the static dielectric constant divided by the electrical conductivity of the material. For hydrocarbon fluids, this is sometimes approximated by dividing the number 18 by the electrical conductivity of the fluid. Thus a fluid that has an electrical conductivity of 1 pS/m has an estimated relaxation time of about 18 seconds. The excess charge in a fluid dissipates almost completely after four to five times the relaxation time, or 90 seconds for the fluid in the above example.
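
The rule of thumb in this paragraph (relaxation time ≈ 18 divided by the conductivity in pS/m, with near-complete dissipation after four to five time constants) is easy to sketch:

```python
# Static relaxation time for a hydrocarbon fluid, per the rule of thumb above.
def relaxation_time_s(conductivity_ps_per_m):
    """Approximate relaxation time in seconds: tau ~ 18 / conductivity (pS/m)."""
    return 18 / conductivity_ps_per_m

tau = relaxation_time_s(1)  # a 1 pS/m fluid, as in the text's example
print(tau)      # 18.0 seconds
print(5 * tau)  # 90.0 seconds for near-complete charge dissipation
```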

Charge generation increases at higher fluid velocities and larger pipe diameters, becoming quite significant in pipes 8 inches (200 mm) or larger. Static charge generation in these systems is best controlled by limiting fluid velocity. The British standard BS PD CLC/TR 50404:2003 (formerly BS-5958-Part 2) Code of Practice for Control of Undesirable Static Electricity prescribes pipe flow velocity limits. Because water content has a large impact on the fluid’s dielectric constant, the recommended velocity for hydrocarbon fluids containing water should be limited to 1 meter per second.

Bonding and earthing are the usual ways charge buildup can be prevented. For fluids with electrical conductivity below 10 pS/m, bonding and earthing are not adequate for charge dissipation, and anti-static additives may be required.[citation needed]

2K resolution is a generic term for display devices or content having horizontal resolution of approximately 2,000 pixels.[1] Digital Cinema Initiatives (DCI) defines 2K resolution standard as 2048×1080.[2][3]

In the movie projection industry, Digital Cinema Initiatives is the dominant standard for 2K output.

Occasionally, 1080p (Full HD or FHD) has been included in the definition of 2K resolution. Although 1920×1080 could be considered as having a horizontal resolution of approximately 2,000 pixels, most media, including web content and books on video production, cinema references and definitions, treat 1080p and 2K as separate resolutions rather than the same.

Although 1080p has the same vertical resolution as DCI 2K resolutions (1080 pixels), it has a smaller horizontal resolution below the range of 2K resolution formats.[4]

According to official reference material, DCI and industry standards do not officially recognize 1080p as a 2K resolution in literature concerning 2K and 4K resolution.[2][3][5][6]

## History

The first electronic scanning format, 405 lines, was the first “high definition” television system, since the mechanical systems it replaced had far fewer. From 1939, Europe and the US tried 605 and 441 lines until, in 1941, the FCC mandated 525 for the US. In wartime France, René Barthélemy tested higher resolutions, up to 1,042. In late 1949, official French transmissions finally began with 819. In 1984, however, this standard was abandoned for 625-line color on the TF1 network.

### Analog

Modern HD specifications date to the early 1980s, when Japanese engineers developed the HighVision 1,125-line interlaced TV standard (also called MUSE) that ran at 60 frames per second. The Sony HDVS system was presented at an international meeting of television engineers in Algiers, April 1981 and Japan’s NHK presented its analog high-definition television (HDTV) system at a Swiss conference in 1983.

The NHK system was standardized in the United States as Society of Motion Picture and Television Engineers (SMPTE) standard #240M in the early 1990s, but abandoned later on when it was replaced by a DVB analog standard. HighVision video is still usable for HDTV video interchange, but there is almost no modern equipment available to perform this function. Attempts at implementing HighVision as a 6 MHz broadcast channel were mostly unsuccessful. All attempts at using this format for terrestrial TV transmission were abandoned by the mid-1990s.[citation needed]

Europe developed HD-MAC (1,250 lines, 50 Hz), a member of the MAC family of hybrid analogue/digital video standards; however, it never took off as a terrestrial video transmission format. HD-MAC was never designated for video interchange except by the European Broadcasting Union.

### Digital

High-definition digital video was not possible with uncompressed video due to impractically high memory and bandwidth requirements, with a bit-rate exceeding 1 Gbit/s for full HD video.[1] Digital HDTV was enabled by the development of discrete cosine transform (DCT) video compression.[2] The DCT is a lossy compression technique that was first proposed by Nasir Ahmed in 1972,[3] and was later adapted into a motion-compensated DCT algorithm for video coding standards such as the H.26x formats from 1988 onwards and the MPEG formats from 1993 onwards.[4][5] Motion-compensated DCT compression significantly reduced the amount of memory and bandwidth required for digital video, capable of achieving a data compression ratio of around 100:1 compared to uncompressed video.[6] By the early 1990s, DCT video compression had been widely adopted as the video coding standard for HDTV.[2]
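
The “exceeding 1 Gbit/s” figure and the ~100:1 compression ratio can be reproduced with straightforward arithmetic. A sketch assuming 24-bit color at 30 frames per second (the exact parameters behind the quoted figure are not stated):

```python
# Raw (uncompressed) bit rate of full HD video at 24-bit color, 30 frame/s.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 30

raw_bps = width * height * bits_per_pixel * fps
print(raw_bps / 1e9)  # 1.492992 -> ~1.5 Gbit/s, consistent with "exceeding 1 Gbit/s"

# With the ~100:1 ratio achieved by motion-compensated DCT compression:
print(raw_bps / 100 / 1e6)  # 14.92992 -> ~15 Mbit/s, a broadcast-friendly rate
```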

The current high-definition video standards in North America were developed during the course of the advanced television process initiated by the Federal Communications Commission in 1987 at the request of American broadcasters. In essence, the end of the 1980s was a death knell for most analog high definition technologies that had developed up to that time.

The FCC process, led by the Advanced Television Systems Committee (ATSC), adopted a range of standards from interlaced 1,080-line video (a technical descendant of the original analog NHK 1125/30 Hz system) with a maximum frame rate of 30 Hz (60 fields per second), to 720-line video, progressively scanned, with a maximum frame rate of 60 Hz. In the end, however, the DVB standard of resolutions (1080, 720, 480) and respective frame rates (24, 25, 30) was adopted in conjunction with the Europeans, who were also involved in the same standardization process. The FCC officially adopted the ATSC transmission standard in 1996 (which included both HD and SD video standards).

In the early 2000s, it looked as if DVB would be the video standard far into the future. However, both Brazil and China have adopted alternative standards for high-definition video[citation needed] that preclude the interoperability that was hoped for after decades of largely non-interoperable analog TV broadcasting.

## Technical details

This chart shows the most common display resolutions, with the color of each resolution type indicating the display ratio (e.g., red indicates a 4:3 ratio)

High definition video (prerecorded and broadcast) is defined threefold, by:

• The number of lines in the vertical display resolution. High-definition television (HDTV) resolution is 1,080 or 720 lines. In contrast, regular digital television (DTV) is 480 lines (upon which NTSC is based, 480 visible scanlines out of 525) or 576 lines (upon which PAL/SECAM are based, 576 visible scanlines out of 625). However, since HD is broadcast digitally, its introduction sometimes coincides with the introduction of DTV. Additionally, current DVD quality is not high-definition, although the high-definition disc systems Blu-ray Disc and the HD DVD are.
• The scanning system: progressive scanning (p) or interlaced scanning (i). Progressive scanning (p) redraws an image frame (all of its lines) when refreshing each image, for example 720p/1080p. Interlaced scanning (i) draws the image field every other line or “odd-numbered” lines during the first image refresh operation, and then draws the remaining “even-numbered” lines during a second refreshing, for example 1080i. Interlaced scanning yields full image resolution when the subject is not moving, but loses up to half of the resolution and suffers “combing” artifacts when the subject is moving.
• The number of frames or fields per second (Hz). The 50 Hz television broadcasting system is more common in Europe, while the USA uses 60 Hz. The 720p60 format is 1,280 × 720 pixels, progressive encoding with 60 frames per second (60 Hz). The 1080i50/1080i60 format is 1920 × 1080 pixels, interlaced encoding with 50/60 fields (50/60 Hz) per second. Two interlaced fields formulate a single frame, because the two fields of one frame are temporally shifted. Frame pulldown and segmented frames are special techniques that allow transmitting full frames by means of an interlaced video stream.

Often, the rate is inferred from the context, usually assumed to be either 50 Hz (Europe) or 60 Hz (USA), except for 1080p, which denotes 1080p24, 1080p25, and 1080p30, but also 1080p50 and 1080p60.

A frame or field rate can also be specified without a resolution. For example, 24p means 24 progressive scan frames per second and 50i means 25 interlaced frames per second, consisting of 50 interlaced fields per second. Most HDTV systems support some standard resolutions and frame or field rates. The most common are noted below.

High-definition signals require a high-definition television or computer monitor in order to be viewed. High-definition video has an aspect ratio of 16:9 (1.78:1). The aspect ratio of regular widescreen film shot today is typically 1.85:1 or 2.39:1 (sometimes traditionally quoted at 2.35:1). Standard-definition television (SDTV) has a 4:3 (1.33:1) aspect ratio, although in recent years many broadcasters have transmitted programs “squeezed” horizontally in 16:9 anamorphic format, in hopes that the viewer has a 16:9 set which stretches the image out to normal-looking proportions, or a set which “squishes” the image vertically to present a “letterbox” view of the image, again with correct proportions.

### Common high-definition video modes

| Video mode | Frame size in pixels (W×H) | Pixels per image¹ | Scanning type | Frame rate (Hz) |
|---|---|---|---|---|
| 720p (also known as HD Ready) | 1,280×720 | 921,600 | Progressive | 23.976, 24, 25, 29.97, 30, 50, 59.94, 60, 72 |
| 1080i (also known as Full HD) | 1,920×1,080 | 2,073,600 | Interlaced | 25 (50 fields/s), 29.97 (59.94 fields/s), 30 (60 fields/s) |
| 1080p (also known as Full HD) | 1,920×1,080 | 2,073,600 | Progressive | 24 (23.976), 25, 30 (29.97), 50, 60 (59.94) |
| 1440p (also known as Quad HD) | 2,560×1,440 | 3,686,400 | Progressive | 24 (23.976), 25, 30 (29.97), 50, 60 (59.94) |

### Ultra high-definition video modes

| Video mode | Frame size in pixels (W×H) | Pixels per image¹ | Scanning type | Frame rate (Hz) |
|---|---|---|---|---|
| 2000 | 2,048×1,536 | 3,145,728 | Progressive | 24, 60 |
| 2160p (also known as 4K UHD) | 3,840×2,160 | 8,294,400 | Progressive | 60, 120 |
| 2540p | 4,520×2,540 | 11,480,800 | Progressive | 24, 30 |
| 4000p | 4,096×3,072 | 12,582,912 | Progressive | 24, 30, 60 |
| 4320p (also known as 8K UHD) | 7,680×4,320 | 33,177,600 | Progressive | 60, 120 |

Note: ¹ An image is either a frame or, in the case of interlaced scanning, two fields (EVEN and ODD).

Also, there are less common but still popular UltraWide resolutions, such as 2560×1080p (1080p UltraWide).

## HD content

High-definition image sources include terrestrial broadcast, direct broadcast satellite, digital cable, high definition disc (BD), digital cameras, Internet downloads, and video game consoles.

• Most computers are capable of HD or higher resolutions over VGA, DVI, HDMI, and/or DisplayPort.
• The optical disc standard Blu-ray Disc can provide enough digital storage to store hours of HD video content. Digital Versatile Discs or DVDs (which hold 4.7 GB for a single layer or 8.5 GB for a double layer) are not always up to the challenge of today’s high-definition (HD) sets. Storing and playing HD movies requires a disc that holds more information, like a Blu-ray Disc (which holds 25 GB in single-layer form and 50 GB for double layer) or the now-defunct High Definition Digital Versatile Discs (HD DVDs), which held 15 GB or 30 GB in, respectively, single and double layer variations.

Blu-ray Discs were jointly developed by 9 initial partners including Sony and Philips (which jointly developed CDs for audio), and Pioneer (which previously developed its own LaserDisc with some success) among others. HD-DVD discs were primarily developed by Toshiba and NEC with some backing from Microsoft, Warner Bros., Hewlett Packard, and others. On February 19, 2008, Toshiba announced it was abandoning the format and would discontinue development, marketing and manufacturing of HD-DVD players and drives.

### Types of recorded media

The high-resolution photographic film used for cinema projection is exposed at the rate of 24 frames per second but usually projected at 48, each frame getting projected twice to help minimise flicker. One exception to this was the 1986 National Film Board of Canada short film Momentum, which briefly experimented with both filming and projecting at 48 frame/s, in a process known as IMAX HD.

Depending upon available bandwidth and the amount of detail and movement in the image, the optimum format for video transfer is either 720p24 or 1080p24. When shown on television in PAL system countries, film must be projected at the rate of 25 frames per second by accelerating it by 4.1 percent. In NTSC standard countries, the projection rate is 30 frames per second, using a technique called 3:2 pull-down. One film frame is held for three video fields (1/20 of a second), and the next is held for two video fields (1/30 of a second) and then the process is repeated, thus achieving the correct film projection rate with two film frames shown in one twelfth of a second.
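
The 3:2 pull-down described above can be sketched as a simple field-sequence generator (the function name is illustrative):

```python
# 3:2 pull-down: film frames are alternately held for three and two video fields,
# so four 24 frame/s film frames fill ten 60 Hz fields.
def pulldown_fields(frames):
    """Map a sequence of film frames to the NTSC 3:2 field sequence."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

seq = pulldown_fields(["A", "B", "C", "D"])
print(seq)       # ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
print(len(seq))  # 10 fields -> two film frames per five fields, i.e. 1/12 s per pair
```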

Older (pre-HDTV) recordings on video tape such as Betacam SP are often either in the form 480i60 or 576i50. These may be upconverted to a higher resolution format, but removing the interlace to match the common 720p format may distort the picture or require filtering which actually reduces the resolution of the final output.

Non-cinematic HDTV video recordings are recorded in either the 720p or the 1080i format. The format used is set by the broadcaster (if for television broadcast). In general, 720p is more accurate with fast action, because it progressively scans frames, instead of the 1080i, which uses interlaced fields and thus might degrade the resolution of fast images.

720p is used more for Internet distribution of high-definition video, because computer monitors progressively scan; 720p video has lower storage and decoding requirements than either 1080i or 1080p. It is also a common format for high-definition broadcasts around the world, while 1080p is used for Blu-ray movies.

### HD in filmmaking

Film as a medium has inherent limitations, such as difficulty of viewing footage while recording, and suffers other problems, caused by poor film development/processing, or poor monitoring systems. Given that there is increasing use of computer-generated or computer-altered imagery in movies, and that editing picture sequences is often done digitally, some directors have shot their movies using the HD format via high-end digital video cameras. While the quality of HD video is very high compared to SD video, and offers improved signal/noise ratios against comparable sensitivity film, film remains able to resolve more image detail than current HD video formats. In addition some films have a wider dynamic range (ability to resolve extremes of dark and light areas in a scene) than even the best HD cameras. Thus the most persuasive arguments for the use of HD are currently cost savings on film stock and the ease of transfer to editing systems for special effects.

Depending on the year and format in which a movie was filmed, the exposed image can vary greatly in size. Sizes range from as big as 24 mm × 36 mm for VistaVision/Technirama 8 perforation cameras (same as 35 mm still photo film) going down through 18 mm × 24 mm for Silent Films or Full Frame 4 perforations cameras to as small as 9 mm × 21 mm in Academy Sound Aperture cameras modified for the Techniscope 2 perforation format. Movies are also produced using other film gauges, including 70 mm films (22 mm × 48 mm) or the rarely used 55 mm and CINERAMA.

The four major film formats provide pixel resolutions (calculated from pixels per millimeter) roughly as follows:

• Academy Sound (Sound movies before 1955): 15 mm × 21 mm (1.375) = 2,160 × 2,970
• Academy camera US Widescreen: 11 mm × 21 mm (1.85) = 1,605 × 2,970
• Current Anamorphic Panavision (“Scope”): 17.5 mm × 21 mm (2.39) = 2,485 × 2,970
• Super-35 for Anamorphic prints: 10 mm × 24 mm (2.39) = 1,420 × 3,390
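
The resolutions above imply a scan density of roughly 141–146 pixels per millimeter (the text only says they are “calculated from pixels per millimeter”). A sketch assuming ~142 px/mm, which reproduces the figures to within about 1%:

```python
# Approximate film-scan resolution from frame dimensions in millimeters.
PX_PER_MM = 142  # assumed density; implied but not stated by the figures above

def frame_pixels(width_mm, height_mm):
    """Return (width_px, height_px) for a film frame at the assumed scan density."""
    return round(width_mm * PX_PER_MM), round(height_mm * PX_PER_MM)

print(frame_pixels(17.5, 21))  # (2485, 2982) -- close to the quoted 2,485 x 2,970
```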

In the process of making prints for exhibition, this negative is copied onto other film (negative → interpositive → internegative → print) causing the resolution to be reduced with each emulsion copying step and when the image passes through a lens (for example, on a projector). In many cases, the resolution can be reduced down to 1/6 of the original negative’s resolution (or worse).[citation needed] Note that resolution values for 70 mm film are higher than those listed above.

### HD on the World Wide Web/HD streaming

A number of online video streaming/on-demand and digital download services offer HD video, among them YouTube, Vimeo, Dailymotion, Amazon Video On Demand, Netflix Watch Instantly, Hulu, HBO Max, and others. Due to heavy compression, the image detail produced by these formats is far below that of broadcast HD, and often even inferior to DVD-Video (3–9 Mbit/s MP2) upscaled to the same image size.[7] The following is a chart of numerous online services and their HD offerings:

#### World Wide Web HD resolutions

| Source | Codec | Highest resolution (W×H) | Total bit rate/bandwidth | Video bit rate | Audio bit rate |
|---|---|---|---|---|---|
| Amazon Video[note 1] | VC-1[8] | 1280×720[9] | 2.5–6 Mbit/s | | |
| BBC iPlayer | H.264[10] | 1280×720[11][note 2] | 3.2 Mbit/s[10] | 3 Mbit/s[10] | 192 kbit/s[10] |
| blinkbox | | 1280×720 | 2.25 Mbit/s (SD) and 4.5 Mbit/s (HD) | 2.25–4.5 Mbit/s | 192 kbit/s |
| Blockbuster Online | | 1280×720 | | | |
| CBS.com/TV.com | | 1920×1080[12] | | 3.5 Mbit/s and 2.5 Mbit/s (720p)[12] | |
| Dacast | VP6, H.264[13] | Unknown | | 5 Mbit/s[14] | |
| Hulu | On2 Flash VP6[15] | 1280×720[16] | | 2.5 Mbit/s[17] | |
| iPlayerHD | FLV, QuickTime H.264, MP4 H.264[18] | 1920×1080[19] | | 2 Mbit/s and 5 Mbit/s[20] | |
| iTunes/Apple TV | QuickTime H.264[21] | 1920×1080[21] | | | |
| MetaCDN | MPEG-4, FLV, OGG, WebM, 3GP[22] | No limit[23] | | | |
| Netflix Watch Instantly | VC-1[24] | 3840×2160[25] | 25 Mbit/s[26] | 2.6 Mbit/s and 3.8 Mbit/s (1080p)[27] | |
| PlayStation Video | H.264/MPEG-4 AVC[28] | 1920×1080[28] | | 8 Mbit/s[28] | 256 kbit/s[28] |
| StreamShark | H.264, FLV, OGV, WebM, VP8, VP9[29] | 1920×1080[30] | | | |
| Vimeo | H.264[31] | 1920×1080[32] | | 4 Mbit/s[33] | 320 kbit/s[34] |
| Vudu | H.264[35] | 1920×1080[36] | | 4.5 Mbit/s[37] | |
| Xbox Video[note 3] | | 1920×1080[38] | | | |
| StreamHash | MP4[39] | 1920×1080[40] | | | |
1. ^ Formerly “Amazon Unbox”, which now refers to a video player software, and later “Amazon Video on Demand”.
2. ^ During live events “BBC iPlayer” streams have a resolution of 1024×576.
3. ^ Formerly “Xbox Live Marketplace Video Store”, but replaced by “Xbox Video” in 2012.

### HD in video surveillance

An increasing number of manufacturers of security cameras now offer HD cameras. The need for high resolution, color fidelity, and frame rate is acute for surveillance purposes to ensure that the quality of the video output is of an acceptable standard that can be used both for preventative surveillance as well as for evidence purposes. These needs, however, must be balanced against the additional storage capacity required by HD video.

### HD in video gaming

Both the PlayStation 3 game console and Xbox 360 can output native 1080p through HDMI or component cables, but the systems have few games which appear in 1080p; most games only run natively at 720p or less, but can be upscaled to 1080p. The Wii can output up to 480p (enhanced-definition) over component, which, while not HD, is very useful for HDTVs as it avoids de-interlacing artifacts. The Wii can also output 576i and 576p in PAL regions.

Visually, native 1080p produces a sharper and clearer picture compared to upscaled 1080p. Though only a handful of games available have the native resolution of 1080p, all games on the Xbox 360 and PlayStation 3 can be upscaled up to this resolution. Xbox 360 and PlayStation 3 games are labeled with the output resolution on the back of their packaging, although on Xbox 360 this indicates the resolution it will upscale to, not the native resolution of the game.

Generally, PC games are only limited by the display’s resolution size. Drivers are capable of supporting very high resolutions, depending on the chipset of the video card. Many game engines support resolutions of 5760×1080 or 5760×1200 (typically achieved with three 1080p displays in a multi-monitor setup) and nearly all will display 1080p at minimum. 1440p and 4K are typically supported resolutions for PC gaming as well.

High-definition video (HDTV Video or HD video) is video of higher resolution and quality than standard-definition. While there is no standardized meaning for high-definition, generally any video image with considerably more than 480 vertical scan lines (North America) or 576 vertical lines (Europe) is considered high-definition. 480 scan lines is generally the minimum even though the majority of systems greatly exceed that. Images of standard resolution captured at rates faster than normal (60 frames/second North America, 50 fps Europe), by a high-speed camera may be considered high-definition in some contexts. Some television series shot on high-definition video are made to look as if they have been shot on film, a technique which is often known as filmizing.

Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media.[1] Video was first developed for mechanical television systems, which were quickly replaced by cathode ray tube (CRT) systems which were later replaced by flat panel displays of several types.

Video systems vary in display resolutionaspect ratiorefresh rate, color capabilities and other qualities. Analog and digital variants exist and can be carried on a variety of media, including radio broadcastmagnetic tapeoptical discscomputer files, and network streaming.

## History

### Analog video

Video technology was first developed for mechanical television systems, which were quickly replaced by cathode ray tube (CRT) television systems, but several new technologies for video display devices have since been invented. Video was originally exclusively a live technology. Charles Ginsburg led an Ampex research team developing one of the first practical video tape recorders (VTRs). In 1951 the first video tape recorder captured live images from television cameras by converting the camera’s electrical impulses and saving the information onto magnetic video tape.

Video recorders were sold for US$50,000 in 1956, and videotapes cost US$300 per one-hour reel.[2] However, prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes into the consumer market.[3]

### Digital video

The use of digital techniques in video created digital video. It could not initially compete with analog video, due to early digital uncompressed video requiring impractically high bitrates. Practical digital video was made possible with discrete cosine transform (DCT) coding,[4] a lossy compression process developed in the early 1970s.[5][6][7] DCT coding was adapted into motion-compensated DCT video compression in the late 1980s, starting with H.261,[4] the first practical digital video coding standard.[8]

Digital video was later capable of higher quality and, eventually, much lower cost than earlier analog technology. After the invention of the DVD in 1997, and later the Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted. Advances in computer technology allow even inexpensive personal computers and smartphones to capture, store, edit and transmit digital video, further reducing the cost of video production and allowing program-makers and broadcasters to move to tapeless production. The advent of digital broadcasting and the subsequent digital television transition is in the process of relegating analog video to the status of a legacy technology in most parts of the world. As of 2015, with the increasing use of high-resolution video cameras with improved dynamic range and color gamuts, and high-dynamic-range digital intermediate data formats with improved color depth, modern digital video technology is converging with digital film technology.

## Characteristics of video streams

### Number of frames per second

Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras. PAL standards (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa etc.) specify 25 frame/s, while NTSC standards (USA, Canada, Japan, etc.) specify 29.97 frame/s.[9] Film is shot at the slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second.[10]
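These rates can be checked with exact arithmetic: NTSC’s 29.97 frame/s is really the rational number 30000/1001. A minimal sketch in standard-library Python:

```python
from fractions import Fraction

# Exact frame rates for common standards; NTSC is 30000/1001 frame/s,
# which rounds to the familiar 29.97.
RATES = {
    "PAL/SECAM": Fraction(25),
    "NTSC": Fraction(30000, 1001),
    "Film": Fraction(24),
}

for name, rate in RATES.items():
    frame_ms = 1000 / rate  # duration of one frame in milliseconds
    print(f"{name}: {float(rate):.3f} frame/s, {float(frame_ms):.3f} ms/frame")
```

Using `Fraction` rather than a rounded float avoids the drift that accumulates when timestamping long recordings at the NTSC rate.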

### Interlaced vs progressive

Video can be interlaced or progressive. In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second. Interlacing retains detail while requiring lower bandwidth compared to progressive scanning.

In interlaced video, the horizontal scan lines of each complete frame are treated as if numbered consecutively, and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines. Analog display devices reproduce each frame, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction of rapidly moving parts of the image when viewed on an interlaced CRT display.
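The field structure described above can be illustrated with a toy Python sketch that splits a frame (represented as a list of scan lines) into its odd and even fields and weaves them back together. Real deinterlacers operate on pixel data and must handle inter-field motion, which this sketch ignores.

```python
def split_fields(frame):
    """Split a frame (list of scan lines) into odd and even fields.

    Lines are numbered from 1, so the 'odd' (upper) field holds
    lines 1, 3, 5, ... which are indices 0, 2, 4, ...
    """
    odd_field = frame[0::2]   # lines 1, 3, 5, ...
    even_field = frame[1::2]  # lines 2, 4, 6, ...
    return odd_field, even_field

def weave(odd_field, even_field):
    """Reassemble a full frame from its two fields (naive 'weave' deinterlace)."""
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.extend([odd_line, even_line])
    return frame

frame = [f"line{i}" for i in range(1, 7)]
odd, even = split_fields(frame)
assert weave(odd, even) == frame  # lossless only when nothing moved between fields
```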

NTSC, PAL and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to indicate interlacing. For example, PAL video format is often described as 576i50, where 576 indicates the total number of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second.
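The shorthand notation can be unpacked mechanically. The parser below is a hypothetical helper (not part of any standard library) that splits a spec such as 576i50 into line count, scan type, and field or frame rate:

```python
import re

def parse_video_spec(spec):
    """Parse shorthand like '576i50' or '720p60' into its parts.

    Returns (lines, scan, rate): scan 'i' means the rate counts fields
    per second (interlaced); 'p' means full frames per second.
    """
    m = re.fullmatch(r"(\d+)([ip])(\d+(?:\.\d+)?)", spec)
    if not m:
        raise ValueError(f"not a valid spec: {spec!r}")
    return int(m.group(1)), m.group(2), float(m.group(3))

lines, scan, rate = parse_video_spec("576i50")
# 576 scan lines, interlaced, 50 fields/s -> 25 complete frames/s
frames_per_s = rate / 2 if scan == "i" else rate
```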

When displaying a natively interlaced signal on a progressive scan device, overall spatial resolution is degraded by simple line doubling; artifacts such as flickering or “comb” effects in moving parts of the image appear unless special signal processing eliminates them. A procedure known as deinterlacing can optimize the display of an interlaced video signal from an analog, DVD or satellite source on a progressive scan device such as an LCD television, digital video projector or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material.

### Aspect ratio

Comparison of common cinematography and traditional television (green) aspect ratios

Aspect ratio describes the proportional relationship between the width and height of video screens and video picture elements. All popular video formats are rectangular, and so can be described by a ratio between width and height. The ratio width to height for a traditional television screen is 4:3, or about 1.33:1. High definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1.

Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard, and the corresponding anamorphic widescreen formats. The 720 by 480 pixel raster uses thin pixels on a 4:3 aspect ratio display and fat pixels on a 16:9 display.
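The “thin” and “fat” pixels can be made concrete with a little arithmetic. The sketch below computes the pixel aspect ratio a 720 × 480 raster needs to fill a 4:3 or a 16:9 display. This is a simplified geometric calculation; real CCIR 601 pixel aspect ratios differ slightly because not all 720 pixels are active.

```python
from fractions import Fraction

def pixel_aspect_ratio(width_px, height_px, display_aspect):
    """Pixel aspect ratio (pixel width / pixel height) needed so that a
    width_px x height_px raster exactly fills a display of the given aspect."""
    return display_aspect / Fraction(width_px, height_px)

raster = (720, 480)  # CCIR 601-style standard-definition raster
par_4_3 = pixel_aspect_ratio(*raster, Fraction(4, 3))    # 8/9: narrower than square
par_16_9 = pixel_aspect_ratio(*raster, Fraction(16, 9))  # 32/27: wider than square
```

A ratio below 1 means each stored pixel must be displayed narrower than it is tall (“thin”); above 1, wider (“fat”).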

The popularity of viewing video on mobile phones has led to the growth of vertical video. Mary Meeker, a partner at Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, highlighted the growth of vertical video viewing in her 2015 Internet Trends Report – growing from 5% of video viewing in 2010 to 29% in 2015. Vertical video ads like Snapchat’s are watched in their entirety nine times more frequently than landscape video ads.[11]

### Color model and depth

Example of U-V color plane, Y value=0.5

The color model describes the video color representation and maps encoded color values to visible colors reproduced by the system. There are several such representations in common use: typically YIQ is used in NTSC television, YUV is used in PAL television, YDbDr is used by SECAM television and YCbCr is used for digital video.

The number of distinct colors a pixel can represent depends on color depth expressed in the number of bits per pixel. A common way to reduce the amount of data required in digital video is by chroma subsampling (e.g., 4:4:4, 4:2:2, etc.). Because the human eye is less sensitive to details in color than brightness, the luminance data for all pixels is maintained, while the chrominance data is averaged for a number of pixels in a block and that same value is used for all of them. For example, this results in a 50% reduction in chrominance data using 2 pixel blocks (4:2:2) or 75% using 4 pixel blocks (4:2:0). This process does not reduce the number of possible color values that can be displayed, but it reduces the number of distinct points at which the color changes.
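The averaging step can be sketched in a few lines, assuming a chroma plane stored as a plain 2D list. Production codecs subsample with proper filtering and handle the luminance plane separately; this toy version only shows the data reduction.

```python
def subsample_chroma_420(chroma):
    """Average chrominance over 2x2 blocks (4:2:0-style subsampling).

    `chroma` is a 2D list (rows x cols, even dimensions) of chroma values;
    luminance would be kept at full resolution and is untouched here.
    """
    rows, cols = len(chroma), len(chroma[0])
    out = []
    for r in range(0, rows, 2):
        row = []
        for c in range(0, cols, 2):
            block = [chroma[r][c], chroma[r][c + 1],
                     chroma[r + 1][c], chroma[r + 1][c + 1]]
            row.append(sum(block) / 4)  # one value per 2x2 block: 75% less data
        out.append(row)
    return out

plane = [[10, 20, 30, 40],
         [10, 20, 30, 40]]
# 2x4 chroma plane -> 1x2 plane; each output value averages one 2x2 block
assert subsample_chroma_420(plane) == [[15.0, 35.0]]
```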

### Video quality

Video quality can be measured with formal metrics like peak signal-to-noise ratio (PSNR) or through subjective video quality assessment using expert observation. Many subjective video quality methods are described in the ITU-R recommendation BT.500. One of the standardized methods is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from “impairments are imperceptible” to “impairments are very annoying”.
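PSNR itself is straightforward to compute from the mean squared error between reference and distorted signals. A minimal sketch for 8-bit samples, flattened into one sequence:

```python
import math

def psnr(reference, distorted, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel values (higher means closer to the reference)."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return math.inf  # identical signals: infinite PSNR by convention
    return 10 * math.log10(max_value ** 2 / mse)

ref = [52, 55, 61, 59]
bad = [50, 57, 59, 61]
print(f"PSNR: {psnr(ref, bad):.2f} dB")
```

Values above roughly 40 dB are generally hard to distinguish from the reference by eye, which is one reason PSNR remains a common first-pass metric despite correlating imperfectly with perceived quality.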

### Video compression method (digital only)

Uncompressed video delivers maximum quality, but with a very high data rate. A variety of methods are used to compress video streams, with the most effective ones using a group of pictures (GOP) to reduce spatial and temporal redundancy. Broadly speaking, spatial redundancy is reduced by registering differences between parts of a single frame; this task is known as intraframe compression and is closely related to image compression. Likewise, temporal redundancy can be reduced by registering differences between frames; this task is known as interframe compression, including motion compensation and other techniques. The most common modern compression standards are MPEG-2, used for DVD, Blu-ray and satellite television, and MPEG-4, used for AVCHD, mobile phones (3GP) and the Internet.
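As a toy illustration of interframe compression, the sketch below stores the first frame whole (an I-frame) and each later frame as a sparse list of changed pixels. Real codecs add motion compensation, transform coding and entropy coding, none of which is modeled here.

```python
def delta_encode(frames):
    """Store the first frame intact and, for each later frame,
    only the (index, value) pairs of pixels that changed."""
    encoded = [("I", list(frames[0]))]
    for prev, cur in zip(frames, frames[1:]):
        diffs = [(i, v) for i, (p, v) in enumerate(zip(prev, cur)) if p != v]
        encoded.append(("P", diffs))  # predicted frame: sparse differences
    return encoded

def delta_decode(encoded):
    """Rebuild the original frames by applying each diff to its predecessor."""
    frames = [list(encoded[0][1])]
    for _kind, diffs in encoded[1:]:
        frame = list(frames[-1])
        for i, v in diffs:
            frame[i] = v
        frames.append(frame)
    return frames

frames = [[1, 1, 1, 1], [1, 1, 2, 1], [1, 1, 2, 3]]
assert delta_decode(delta_encode(frames)) == frames
```

When consecutive frames are mostly static (the common case), the P-frame diffs are far smaller than the frames themselves, which is exactly the temporal redundancy the text describes.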

### Stereoscopic

Stereoscopic video for 3d film and other applications can be displayed using several different methods:

• Two channels: a right channel for the right eye and a left channel for the left eye. Both channels may be viewed simultaneously by using light-polarizing filters 90 degrees off-axis from each other on two video projectors. These separately polarized channels are viewed wearing eyeglasses with matching polarization filters.
• Anaglyph 3D where one channel is overlaid with two color-coded layers. This left and right layer technique is occasionally used for network broadcast, or recent anaglyph releases of 3D movies on DVD. Simple red/cyan plastic glasses provide the means to view the images discretely to form a stereoscopic view of the content.
• One channel with alternating left and right frames for the corresponding eye, using LCD shutter glasses that synchronize to the video to alternately block the image to each eye, so the appropriate eye sees the correct frame. This method is most common in computer virtual reality applications such as in a Cave Automatic Virtual Environment, but reduces effective video framerate by a factor of two.

## Formats

Different layers of video transmission and storage each provide their own set of formats to choose from.

For transmission, there is a physical connector and signal protocol (see List of video connectors). A given physical link can carry certain display standards that specify a particular refresh rate, display resolution, and color space.

Many analog and digital recording formats are in use, and digital video clips can also be stored on a computer file system as files, which have their own formats. In addition to the physical format used by the data storage device or transmission medium, the stream of ones and zeros that is sent must be in a particular digital video coding format, of which a number are available (see List of video coding formats).

### Analog video

Analog video is a video signal represented by one or more analog signals. Analog color video signals include luminance (brightness, Y) and chrominance (C). When combined into one channel, as is the case with NTSC, PAL and SECAM among others, it is called composite video. Analog video may be carried in separate channels, as in two-channel S-Video (YC) and multi-channel component video formats.

Analog video is used in both consumer and professional television production applications.

### Digital video

Digital video signal formats have been adopted, including serial digital interface (SDI), Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI) and DisplayPort.

## Transport

Video can be transmitted or transported in a variety of ways, including wireless terrestrial television as an analog or digital signal, or coaxial cable in a closed-circuit system as an analog signal. Broadcast or studio cameras use a single or dual coaxial cable system using the serial digital interface (SDI). See List of video connectors for information about physical connectors and related signal standards.

Video may be transported over networks and other shared digital communications links using, for instance, MPEG transport stream, SMPTE 2022 and SMPTE 2110.

## Display standards

### Digital television

Digital television broadcasts use MPEG-2 and other video coding formats.

### Analog television

An analog video format consists of more information than the visible content of the frame. Preceding and following the image are lines and pixels containing metadata and synchronization information. This surrounding margin is known as a blanking interval or blanking region; the horizontal and vertical front porch and back porch are the building blocks of the blanking interval.

### Computer displays

Computer display standards specify a combination of aspect ratio, display size, display resolution, color depth, and refresh rate. A list of common resolutions is available.

Electronics comprises the physics, engineering, technology and applications that deal with the emission, flow and control of electrons in vacuum and matter.[1] It uses active devices to control electron flow by amplification and rectification, which distinguishes it from classical electrical engineering, which uses passive effects such as resistance, capacitance and inductance to control current flow.

Electronics has had a major effect on the development of modern society. The identification of the electron in 1897, along with the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age.[2] This distinction started around 1906 with the invention by Lee De Forest of the triode, which made electrical amplification of weak radio signals and audio signals possible with a non-mechanical device. Until 1950, this field was called “radio technology” because its principal application was the design and theory of radio transmitters, receivers, and vacuum tubes.

The term “solid-state electronics” emerged after the first working transistor was invented by William Shockley, Walter Houser Brattain and John Bardeen at Bell Labs in 1947. The MOSFET (MOS transistor) was later invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses, revolutionizing the electronics industry, and playing a central role in the microelectronics revolution and Digital Revolution. The MOSFET has since become the basic element in most modern electronic equipment, and is the most widely used electronic device in the world.

Electronics is widely used in information processing, telecommunication, and signal processing. The ability of electronic devices to act as switches makes digital information-processing possible. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed electronic components into a regular working system, called an electronic system; examples are computers or control systems. An electronic system may be a component of another engineered system or a standalone device. As of 2019 most electronic devices[3] use semiconductor components to perform electron control. Commonly, electronic devices contain circuitry consisting of active semiconductors supplemented with passive elements; such a circuit is described as an electronic circuit. Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, integrated circuits, optoelectronics, and sensors, associated passive electrical components, and interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes amplification of weak signals possible.

The study of semiconductor devices and related technology is considered a branch of solid-state physics, whereas the design and construction of electronic circuits to solve practical problems come under electronics engineering. This article focuses on engineering aspects of electronics.

## Electronic devices and components

One of the earliest Audion radio receivers, constructed by De Forest in 1914.

Electronics Technician performing a voltage check on a power circuit card in the air navigation equipment room aboard the aircraft carrier USS Abraham Lincoln (CVN-72).

An electronic component is any physical entity in an electronic system used to affect the electrons or their associated fields in a manner consistent with the intended function of the electronic system. Components are generally intended to be connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function (for example an amplifier, radio receiver, or oscillator). Components may be packaged singly, or in more complex groups as integrated circuits. Some common electronic components are capacitors, inductors, resistors, diodes, transistors, etc. Components are often categorized as active (e.g. transistors and thyristors) or passive (e.g. resistors, diodes, inductors and capacitors).[4]

Vacuum tubes (thermionic valves) were among the earliest electronic components.[5] They were almost solely responsible for the electronics revolution of the first half of the twentieth century.[6][7] They allowed for vastly more complicated systems and gave us radio, television, phonographs, radar, long-distance telephony and much more. They played a leading role in the field of microwave and high power transmission as well as television receivers until the middle of the 1980s.[8] Since that time, solid-state devices have all but completely taken over. Vacuum tubes are still used in some specialist applications such as high power RF amplifiers, cathode ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices.

The first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947.[9] In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market.[10][11] The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. From that time on transistors were almost exclusively used for computer logic and peripherals. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.[12]

The MOSFET (MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959.[13][14][15][16] The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses.[12] Its advantages include high scalability,[17] affordability,[18] low power consumption, and high density.[19] It revolutionized the electronics industry,[20][21] becoming the most widely used electronic device in the world.[15][22] The MOSFET is the basic element in most modern electronic equipment,[23][24] and has been central to the electronics revolution,[25] the microelectronics revolution,[26] and the Digital Revolution.[16][27][28] The MOSFET has thus been credited as the birth of modern electronics,[29][30] and possibly the most important invention in electronics.[31]

## Types of circuits

Circuits and components can be divided into two groups: analog and digital. A particular device may consist of circuitry that has one or the other or a mix of the two types.

### Analog circuits

Hitachi J100 adjustable frequency drive chassis

Most analog electronic appliances, such as radio receivers, are constructed from combinations of a few types of basic circuits. Analog circuits use a continuous range of voltage or current as opposed to discrete levels as in digital circuits.

The number of different analog circuits so far devised is huge, especially because a ‘circuit’ can be defined as anything from a single component, to systems containing thousands of components.

Analog circuits are sometimes called linear circuits although many non-linear effects are used in analog circuits such as mixers, modulators, etc. Good examples of analog circuits include vacuum tube and transistor amplifiers, operational amplifiers and oscillators.

One rarely finds modern circuits that are entirely analog. These days analog circuitry may use digital or even microprocessor techniques to improve performance. This type of circuit is usually called “mixed signal” rather than analog or digital.

Sometimes it may be difficult to differentiate between analog and digital circuits as they have elements of both linear and non-linear operation. An example is the comparator which takes in a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch having essentially two levels of output. In fact, many digital circuits are actually implemented as variations of analog circuits similar to this example – after all, all aspects of the real physical world are essentially analog, so digital effects are only realized by constraining analog behavior.

### Digital circuits

Digital circuits are electric circuits based on a number of discrete voltage levels. Digital circuits are the most common physical representation of Boolean algebra, and are the basis of all digital computers. To most engineers, the terms “digital circuit”, “digital system” and “logic” are interchangeable in the context of digital circuits. Most digital circuits use a binary system with two voltage levels labeled “0” and “1”. Often logic “0” will be a lower voltage and referred to as “Low” while logic “1” is referred to as “High”. However, some systems use the reverse definition (“0” is “High”) or are current based. Quite often the logic designer may reverse these definitions from one circuit to the next as he sees fit to facilitate his design. The definition of the levels as “0” or “1” is arbitrary.
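The two-level convention can be sketched with explicit thresholds. The numbers below are the classic TTL input thresholds, used here purely for illustration; actual thresholds depend on the logic family.

```python
# Illustrative TTL-style input thresholds (volts): at or below V_IL the
# input reads as logic 0, at or above V_IH as logic 1; in between, the
# level is undefined (the "forbidden region").
V_IL, V_IH = 0.8, 2.0

def logic_level(voltage):
    if voltage <= V_IL:
        return 0
    if voltage >= V_IH:
        return 1
    return None  # neither a valid 0 nor a valid 1

assert logic_level(0.2) == 0
assert logic_level(3.3) == 1
assert logic_level(1.4) is None
```

The gap between V_IL and V_IH is what gives digital circuits their noise immunity: small analog disturbances do not flip a valid level.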

Ternary (with three states) logic has been studied, and some prototype computers made.

Computers, electronic clocks, and programmable logic controllers (used to control industrial processes) are constructed of digital circuits. Digital signal processors are another example.

Building blocks of digital circuits include logic gates, flip-flops, counters and registers. Highly integrated devices include microprocessors, microcontrollers and application-specific integrated circuits (ASICs).

## Heat dissipation and thermal management

Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Heat dissipation is mostly achieved by passive conduction/convection. Means to achieve greater dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy.
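A common back-of-the-envelope model for this is the thermal-resistance equation T_j = T_a + P · θ_JA. The sketch below compares a hypothetical part with and without a heat sink; all numbers are assumed for illustration.

```python
def junction_temp(ambient_c, power_w, theta_ja):
    """Steady-state junction temperature from the thermal-resistance
    model T_j = T_a + P * theta_JA (theta_JA in degrees C per watt)."""
    return ambient_c + power_w * theta_ja

# Hypothetical part dissipating 2 W in a 25 C ambient:
# assumed theta_JA of 50 C/W bare vs. 20 C/W with a heat sink.
no_sink = junction_temp(25, 2.0, 50)    # 125 C
with_sink = junction_temp(25, 2.0, 20)  # 65 C
```

Lowering θ_JA (bigger heat sink, forced airflow) is what keeps the junction below its rated maximum at a given power.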

## Noise

Electronic noise is defined[32] as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise is associated with all electronic circuits. Noise may be electromagnetically or thermally generated, which can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise cannot be removed as they are due to limitations in physical properties.
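Thermally generated (Johnson–Nyquist) noise follows V_rms = sqrt(4 · k_B · T · R · B), which makes the benefit of cooling easy to quantify. The sketch below evaluates it for an illustrative 1 kΩ resistor:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_vrms(resistance_ohm, bandwidth_hz, temp_k=300):
    """RMS Johnson-Nyquist (thermal) noise voltage across a resistor."""
    return math.sqrt(4 * K_B * temp_k * resistance_ohm * bandwidth_hz)

# A 1 kOhm resistor over a 10 kHz bandwidth near room temperature:
v = thermal_noise_vrms(1e3, 10e3)
print(f"{v * 1e9:.0f} nV rms")  # halving T reduces this by a factor of sqrt(2)
```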

## Electronics theory

Mathematical methods are integral to the study of electronics. To become proficient in electronics it is also necessary to become proficient in the mathematics of circuit analysis.

Circuit analysis is the study of methods of solving generally linear systems for unknown variables such as the voltage at a certain node or the current through a certain branch of a network. A common analytical tool for this is the SPICE circuit simulator.
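Nodal analysis of a small linear circuit reduces to exactly such a linear system. The sketch below solves a hypothetical two-node resistor network by Cramer’s rule; a SPICE-style simulator does the same thing at much larger scale.

```python
def solve_2x2(a, b):
    """Solve a 2x2 linear system A x = b by Cramer's rule."""
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    x1 = (b[0] * a22 - a12 * b[1]) / det
    x2 = (a11 * b[1] - b[0] * a21) / det
    return x1, x2

# Nodal analysis of: 9V --R1-- node1 --R2-- node2 --R3-- ground
R1, R2, R3, V = 100.0, 200.0, 300.0, 9.0
# Kirchhoff's current law at each node, in conductance form G v = i:
G = [[1/R1 + 1/R2, -1/R2],
     [-1/R2, 1/R2 + 1/R3]]
I = [V / R1, 0.0]
v1, v2 = solve_2x2(G, I)  # node voltages in volts
```

Since R1, R2, R3 form a simple series chain here, the result can be cross-checked with a voltage-divider argument: the loop current is 9 V / 600 Ω = 15 mA, giving v1 = 7.5 V and v2 = 4.5 V.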

Also important to electronics is the study and understanding of electromagnetic field theory.

## Electronics lab

Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer’s design and detect errors. Historically, electronics labs have consisted of electronics devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice.

Today’s electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (i.e. semiconductor devices, such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), EAGLE PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others.

## Packaging methods

Many different methods of connecting components have been used over the years. For instance, early electronics often used point-to-point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) synthetic resin bonded paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2), characterised by its brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined for the European Union, with its Restriction of Hazardous Substances Directive (RoHS) and Waste Electrical and Electronic Equipment Directive (WEEE), which went into force in July 2006.

## Electronic systems design

Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal.[33] Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user.

## Mounting options

Electrical components are generally mounted in a number of ways, most commonly through-hole or surface-mount construction.

## Electronics industry

The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector,[34] which has annual sales of over $481 billion as of 2018.[35] The largest industry sector is e-commerce, which generated over $29 trillion in 2017.[36] The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13 sextillion MOSFETs having been manufactured between 1960 and 2018.[37]

Consumer electronics or home electronics are electronic (analog or digital) equipment intended for everyday use, typically in private homes. Consumer electronics include devices used for entertainment, communications and recreation. In British English, they are often called brown goods by producers and sellers, to distinguish them from “white goods”, which are meant for housekeeping tasks such as washing machines and refrigerators, although nowadays some of these are also connected to the Internet.[1][n 1] In the 2010s, this distinction is absent in large big box consumer electronics stores, which sell entertainment, communication, and home office devices as well as kitchen appliances such as refrigerators. The highest-selling consumer electronics products are compact discs.[3]

Radio broadcasting in the early 20th century brought the first major consumer product, the broadcast receiver. Later products included telephones, televisions and calculators, then audio and video recorders and players, game consoles, personal computers and MP3 players. In the 2010s, consumer electronics stores often sell GPS devices, automotive electronics (car stereos), video game consoles, electronic musical instruments (e.g., synthesizer keyboards), karaoke machines, digital cameras, and video players (VCRs in the 1980s and 1990s, followed by DVD players and Blu-ray players). Stores also sell smart appliances, digital cameras, camcorders, cell phones, and smartphones. Some of the newer products sold include virtual reality head-mounted display goggles, smart home devices that connect home devices to the Internet, and wearable technology.

In the 2010s, most consumer electronics have become based on digital technologies, and have largely merged with the computer industry in what is increasingly referred to as the consumerization of information technology. Some consumer electronics stores have also begun selling office and baby furniture. Consumer electronics stores may be “brick and mortar” physical retail stores, online stores, or combinations of both.

Annual consumer electronics sales are expected to reach $2.9 trillion by 2021.[4] It is part of the wider electronics industry. In turn, the driving force behind the electronics industry is the semiconductor industry.[5] The basic building block of modern electronics is the MOSFET (metal-oxide-silicon field-effect transistor, or MOS transistor),[6][7] the scaling and miniaturization of which has been the primary factor behind the rapid exponential growth of electronic technology since the 1960s.[8] ## History 2K Monitor Best of 2021 A radio and TV store in 1961 For its first fifty years the phonograph turntable did not use electronics; the needle and soundhorn were purely mechanical technologies. However, in the 1920s radio broadcasting became the basis of mass production of radio receivers. The vacuum tubes that had made radios practical were used with record players as well, to amplify the sound so that it could be played through a loudspeakerTelevision was soon invented, but remained insignificant in the consumer market until the 1950s. ## 2K Monitor Best of 2021 The first working transistor, a point-contact transistor, was invented by John Bardeen and Walter Houser Brattain at Bell Laboratories in 1947, which led to significant research in the field of solid-state semiconductors in the early 1950s.[9] The invention and development of the earliest transistors at Bell led to transistor radios. This led to the emergence of the home entertainment consumer electronics industry starting in the 1950s, largely due to the efforts of Tokyo Tsushin Kogyo (now Sony) in successfully commercializing transistor technology for a mass market, with affordable transistor radios and then transistorized television sets.[10] ## 2K Monitor Best of 2021 Mohamed M. 
Atalla‘s surface passivation process, developed at Bell in 1957, led to the planar process and planar transistor developed by Jean Hoerni at Fairchild Semiconductor in 1959,[11] from which comes the origins of Moore’s law,[12] and the invention of the MOSFET (metal–oxide–silicon field-effect transistor, or MOS transistor) by Mohamed Atalla and Dawon Kahng at Bell in 1959.[6][7][13] The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses,[14] enabling Moore’s law[15] and revolutionizing the electronics industry.[16][17] It has since been the building block of modern digital electronics,[13][18] and the “workhorse” of the electronics industry.[19] Integrated circuits (ICs) followed when manufacturers built circuits (usually for military purposes) on a single substrate using electrical connections between circuits within the chip itself. The most common type of IC is the MOS integrated circuit chip, capable of the large-scale integration (LSI) of MOSFETs on an IC chip. MOS technology led to more advanced and cheaper consumer electronics, such as transistorized televisions, pocket calculators, and by the 1980s, affordable video game consoles and personal computers that regular middle-class families could buy. 
The rapid progress of the electronics industry during the late 20th to early 21st centuries was achieved by rapid MOSFET scaling (related to Dennard scaling and Moore’s law), down to sub-micron levels and then nanoelectronics in the early 21st century.[20] The MOSFET is the most widely manufactured device in history, with an estimated total of 13 sextillion MOSFETs manufactured between 1960 and 2018.[21][22] ## Products 2K Monitor Best of 2021 A typical CoCo 3 computer system, from the 1980s ## 2K Monitor Best of 2021 Consumer electronics devices include those used for [23] Increasingly consumer electronics products such as Digital distribution of video games have become based on internet and digital technologies. Consumer electronics industry has largely merged with the software industry in what is increasingly referred to as the consumerization of information technology. List of top consumer electronics products by number of shipments Electronic deviceShipments (est. billion units) Production years includedRef Compact disc (CD)2001982–2007[24] Audio cassette tape301963–2019[25] Digital versatile disc (DVD)201996–2012[26] Mobile phone19.41994–2018[b] Smartphone10.12007–2018[a] Video cassette101976–2000[30][31] ### Trends 2K Monitor Best of 2021 A modern flat panel, HDTV television set ## 2K Monitor Best of 2021 One overriding characteristic of consumer electronic products is the trend of ever-falling prices. This is driven by gains in manufacturing efficiency and automation, lower labor costs as manufacturing has moved to lower-wage countries, and improvements in semiconductor design.[32] Semiconductor components benefit from Moore’s law, an observed principle which states that, for a given price, semiconductor functionality doubles every two years. While consumer electronics continues in its trend of convergence, combining elements of many products, consumers face different decisions when purchasing. 
There is an ever-increasing need to keep product information updated and comparable, for the consumer to make an informed choice. Style, price, specification, and performance are all relevant. There is a gradual shift towards e-commerce web-storefronts. Many products include Internet connectivity using technologies such as Wi-Fi, Bluetooth, EDGE, or Ethernet. Products not traditionally associated with computer use (such as TVs or Hi-Fi equipment) now provide options to connect to the Internet or to a computer using a home network to provide access to digital content. The desire for high-definition (HD) content has led the industry to develop a number of technologies, such as WirelessHD or ITU-T G.hn, which are optimized for distribution of HD content between consumer electronic devices in a home.

## Industries

The electronics industry, especially meaning consumer electronics, emerged in the 20th century and has now become a global industry worth billions of dollars. Contemporary society uses all manner of electronic devices built in automated or semi-automated factories operated by the industry.

Most consumer electronics are built in China, due to maintenance cost, availability of materials, quality, and speed as opposed to other countries such as the United States.[33] Cities such as Shenzhen have become important production centres for the industry, attracting many consumer electronics companies such as Apple Inc.[34]

#### Electronic component

An electronic component is any basic discrete device or physical entity in an electronic system used to affect electrons or their associated fields. Electronic components are mostly industrial products, available in a singular form, and are not to be confused with electrical elements, which are conceptual abstractions representing idealized electronic components.
#### Software development

Consumer electronics such as personal computers use various types of software. Embedded software is used within some consumer electronics, such as mobile phones.[35] This type of software may be embedded within the hardware of electronic devices.[36] Some consumer electronics include software that is used on a personal computer in conjunction with electronic devices, such as camcorders and digital cameras, and third-party software for such devices also exists.

#### Standardization

Some consumer electronics adhere to protocols, such as connection protocols “to high speed bi-directional signals”.[37] In telecommunications, a communications protocol is a system of digital rules for data exchange within or between computers.

### Trade shows

The Consumer Electronics Show (CES) trade show has taken place yearly in Las Vegas, Nevada since its foundation in 1973. The event, which grew from having 100 exhibitors in its inaugural year to more than 4,500 exhibiting companies in its 2021 edition, features the latest in consumer electronics, speeches by industry experts, and innovation awards.[38] The Internationale Funkausstellung Berlin (IFA) trade show has taken place in Berlin, Germany since its foundation in 1924. The event features new consumer electronics and speeches by industry pioneers.

### IEEE initiatives

The Institute of Electrical and Electronics Engineers (IEEE), the world’s largest professional society, has many initiatives to advance the state of the art of consumer electronics. IEEE has a dedicated society of thousands of professionals to promote CE, called the Consumer Electronics Society (CESoc).[39] IEEE has multiple periodicals and international conferences to promote CE and encourage collaborative research and development in CE. The flagship conference of CESoc, the IEEE International Conference on Consumer Electronics (ICCE), is in its 35th year.
- IEEE Transactions on Consumer Electronics[40]
- IEEE Consumer Electronics Magazine[41]
- IEEE International Conference on Consumer Electronics (ICCE)[42]

### Retailing

Electronics retailing is a significant part of the retail industry in many countries. In the United States, dedicated consumer electronics stores have mostly given way to big-box retailers such as Best Buy, the largest consumer electronics retailer in the country,[43] although smaller dedicated stores include Apple Stores, and specialist stores that serve, for example, audiophiles, as well as exceptions such as the single-branch B&H Photo store in New York City. Broad-based retailers, such as Wal-Mart and Target, also sell consumer electronics in many of their stores.[43] In April 2014, retail e-commerce sales were the highest in the consumer electronic and computer categories as well.[44] Some consumer electronics retailers offer extended warranties on products with programs such as SquareTrade.[45] An electronics district is an area of commerce with a high density of retail stores that sell consumer electronics.[46]

### Service and repair

Consumer electronic service can refer to the maintenance of said products. When consumer electronics have malfunctions, they may sometimes be repaired. In 2013 in Pittsburgh, Pennsylvania, the increased popularity of listening to sound from analog audio devices, such as record players, as opposed to digital sound, sparked a noticeable increase in business for the electronic repair industry there.[47]

### Mobile phone industry

This picture illustrates how the mobile phone industry evolved to what we see today as modern smartphones

#### By country

### Rare metals and rare earth elements

Electronic devices use thousands of rare metals and rare earth elements (40 on average for a smartphone); these materials are extracted and refined using water- and energy-intensive processes.
These metals are also used in the renewable energy industry, meaning that consumer electronics directly compete for the raw materials.[48][49]

### Energy consumption

The energy consumption of consumer electronics and their environmental impact, either from their production processes or the disposal of the devices, is increasing steadily. The EIA estimates that electronic devices and gadgets account for about 10%–15% of the energy use in American homes, largely because of their number; the average house has dozens of electronic devices.[50] The energy consumption of consumer electronics rises, in America and Europe, to about 50% of household consumption if the term is redefined to include home appliances such as refrigerators, dryers, clothes washers, and dishwashers.

### Standby power

Standby power, used by consumer electronics and appliances while they are turned off, accounts for 5–10% of total household energy consumption, costing $100 annually to the average household in the United States.[51] A study by the United States Department of Energy’s Berkeley Lab found that videocassette recorders (VCRs) consume more electricity during the course of a year in standby mode than when they are used to record or play back videos. Similar findings were obtained concerning satellite boxes, which consume almost the same amount of energy in “on” and “off” modes.[52]

A 2012 study in the United Kingdom, carried out by the Energy Saving Trust, found that the devices using the most power on standby mode included televisions, satellite boxes and other video and audio equipment. The study concluded that UK households could save up to £86 per year by switching devices off instead of using standby mode.[53] A report from the International Energy Agency in 2014 found that $80 billion of power is wasted globally per year due to inefficiency of electronic devices.[54] Consumers can reduce unwanted use of standby power by unplugging their devices, using power strips with switches, or by buying devices that are standardized for better energy management, particularly Energy Star marked products.[51]

### Electronic waste

A high number of different metals and low concentration rates in electronics means that recycling is limited and energy intensive.[48] Electronic waste describes discarded electrical or electronic devices. Many consumer electronics may contain toxic minerals and elements,[55] and many electronic scrap components, such as CRTs, may contain contaminants such as lead, cadmium, beryllium, mercury, dioxins, or brominated flame retardants. Electronic waste recycling may involve significant risk to workers and communities, and great care must be taken to avoid unsafe exposure in recycling operations and leaking of materials such as heavy metals from landfills and incinerator ashes. However, large amounts of the electronic waste produced in developed countries are exported to, and handled by, the informal sector in countries like India, despite the fact that exporting electronic waste to them is illegal. A strong informal sector can be a problem for safe and clean recycling.[56]

### Reuse and repair

E-waste policy has gone through various incarnations since the 1970s, with emphases changing as the decades passed. More weight was gradually placed on the need to dispose of e-waste more carefully due to the toxic materials it may contain. There has also been recognition that various valuable metals and plastics from waste electrical equipment can be recycled for other uses. More recently the desirability of reusing whole appliances has been foregrounded in the ‘preparation for reuse’ guidelines. The policy focus is slowly moving towards a potential shift in attitudes to reuse and repair.

With turnover of small household appliances high and costs relatively low, many consumers will throw unwanted electric goods in the normal dustbin, meaning that items of potentially high reuse or recycling value go to landfills. While larger items such as washing machines are usually collected, it has been estimated that the 160,000 tonnes of EEE in regular waste collections was worth £220 million. And 23% of EEE taken to Household Waste Recycling Centres was immediately resaleable – or would be with minor repairs or refurbishment. This indicates a lack of awareness among consumers as to where and how to dispose of EEE, and of the potential value of things that are literally going in the bin.

For reuse and repair of electrical goods to increase substantially in the UK there are barriers that must be overcome. These include people’s mistrust of used equipment in terms of whether it will be functional and safe, and the stigma for some of owning second-hand goods. But the benefits of reuse could allow lower income households access to previously unaffordable technology whilst helping the environment at the same time. (Cole, C., Cooper, T. and Gnanapragasam, A., 2016. Extending product lifetimes through WEEE reuse and repair: opportunities and challenges in the UK. In: Electronics Goes Green 2016+ Conference, Berlin, Germany, 7–9 September 2016)

## Health impact

Desktop monitors and laptops produce major physical health concerns for humans when bodies are forced into positions that are unhealthy and uncomfortable in order to see the screen better. From this, neck and back pains and problems increase, commonly referred to as repetitive strain injuries. Using electronics before going to bed makes it difficult for people to fall asleep, which has a negative effect on human health. Sleeping less prevents people from performing to their full potential physically and mentally and can also “increase rates of obesity and diabetes,” which are “long-term health consequences”.[57] Obesity and diabetes are more commonly seen in students and in youth because they tend to be the ones using electronics the most. “People who frequently use their thumbs to type text messages on cell phones can develop a painful affliction called De Quervain syndrome that affects their tendons on their hands. The best known disease in this category is called carpal tunnel syndrome, which results from pressure on the median nerve in the wrist”.[57]

## History

The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), who also established that by using the binary system, the principles of arithmetic and logic could be joined. Digital logic as we know it was the brain-child of George Boole in the mid-19th century. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[2] Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest’s modification of the Fleming valve in 1907 could be used as an AND gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the 1954 Nobel Prize in Physics for the first modern electronic AND gate in 1924.

Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs).[3]

The Z3 was an electromechanical computer designed by Konrad Zuse. Finished in 1941, it was the world’s first working programmable, fully automatic digital computer.[4] The later move to fully electronic computation was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming.

At the same time that digital calculation replaced analog, purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents. John Bardeen and Walter Brattain invented the point-contact transistor at Bell Labs in 1947, followed by William Shockley inventing the bipolar junction transistor at Bell Labs in 1948.[5][6]

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes.[7] Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the “second generation” of computers. Compared to vacuum tubes, transistors were smaller, more reliable, had indefinite lifespans, and required less power, thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space.

While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas concerning the integrated circuit (IC), then successfully demonstrated the first working integrated circuit on 12 September 1958.[8] Kilby’s chip was made of germanium. The following year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated circuit. The basis for Noyce’s silicon IC was the planar process, developed in early 1959 by Jean Hoerni, who was in turn building on Mohamed Atalla’s silicon surface passivation method developed in 1957.[9] This new technique, the integrated circuit, allowed for quick, low-cost fabrication of complex circuits by having a set of electronic circuits on one small plate (“chip”) of semiconductor material, normally silicon.

The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959.[10][11][12] The MOSFET’s advantages include high scalability,[13] affordability,[14] low power consumption, and high transistor density.[15] Its rapid on–off electronic switching speed also makes it ideal for generating pulse trains,[16] the basis for electronic digital signals,[17][18] in contrast to BJTs which more slowly generate analog signals resembling sine waves.[16] Along with MOS large-scale integration (LSI), these factors make the MOSFET an important switching device for digital circuits.[19] The MOSFET revolutionized the electronics industry,[20][21] and is the most common semiconductor device.[11][22] MOSFETs are the fundamental building blocks of digital electronics, during the Digital Revolution of the late 20th to early 21st centuries.[12][23][24] This paved the way for the Digital Age of the early 21st century.[12]

In the early days of integrated circuits, each chip was limited to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today’s standards. The wide adoption of the MOSFET transistor by the early 1970s led to the first large-scale integration (LSI) chips with more than 10,000 transistors on a single chip.[25] Following the wide adoption of CMOS, a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be placed on one chip as the technology progressed,[26] and good designs required thorough planning, giving rise to new design methods. The transistor count of both individual devices and total production rose to unprecedented heights. The total number of transistors produced up to 2018 has been estimated at 1.3×10²² (13 sextillion).[27]

The wireless revolution, the introduction and proliferation of wireless networks, began in the 1990s and was enabled by the wide adoption of MOSFET-based RF power amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS).[28][29][30] Wireless networks allowed for public digital transmission without the need for cables, leading to digital television (digital TV), GPS, satellite radio, wireless Internet, and mobile phones through the 1990s–2000s.

Discrete cosine transform (DCT) coding, a data compression technique first proposed by Nasir Ahmed in 1972,[31] enabled practical digital media transmission,[32][33][34] with image compression formats such as JPEG (1992), video coding formats such as H.26x (1988 onwards) and MPEG (1993 onwards),[35] audio coding standards such as Dolby Digital (1991)[36][37] and MP3 (1994),[35] and digital TV standards such as video-on-demand (VOD)[32] and high-definition television (HDTV).[38] Internet video was popularized by YouTube, an online video platform founded by Chad Hurley, Jawed Karim, and Steve Chen in 2005, which enabled the video streaming of MPEG-4 AVC (H.264) user-generated content from anywhere on the World Wide Web.[39]

An advantage of digital circuits when compared to analog circuits is that signals represented digitally can be transmitted without degradation caused by noise.[40] For example, a continuous audio signal transmitted as a sequence of 1s and 0s, can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s.
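A tiny simulation can illustrate this noise immunity (a sketch with made-up voltage levels, not a model of any real channel): each bit is sent as a 0 V or 1 V level, noise is added, and the receiver simply thresholds at 0.5 V.

```python
import random

def transmit(bits, noise_amplitude):
    """Send each bit as a 0 V / 1 V level with additive uniform noise,
    then recover it by thresholding at 0.5 V."""
    received = []
    for b in bits:
        level = float(b) + random.uniform(-noise_amplitude, noise_amplitude)
        received.append(1 if level > 0.5 else 0)
    return received

random.seed(0)
bits = [random.randint(0, 1) for _ in range(1000)]
# Noise stays below the 0.5 V decision threshold: perfect recovery.
assert transmit(bits, noise_amplitude=0.4) == bits
```

As long as the noise never pushes a level across the threshold, the reconstruction is exact, which an analog system cannot guarantee.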

In a digital system, a more precise representation of a signal can be obtained by using more binary digits to represent it. While this requires more digital circuits to process the signals, each digit is handled by the same kind of hardware, resulting in an easily scalable system. In an analog system, additional resolution requires fundamental improvements in the linearity and noise characteristics of each step of the signal chain.
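This scaling is easy to see numerically. The sketch below (illustrative values only) quantizes samples to 4 and then 8 bits; doubling the digit count shrinks the worst-case representation error without changing the kind of hardware involved.

```python
def quantize(x, bits):
    """Round x in [0, 1) to the nearest of 2**bits uniform levels."""
    levels = 2 ** bits
    return round(x * levels) / levels

signal = [0.1234, 0.5678, 0.9012]
err4 = max(abs(s - quantize(s, 4)) for s in signal)  # 4-bit resolution
err8 = max(abs(s - quantize(s, 8)) for s in signal)  # 8-bit resolution
assert err8 < err4  # more binary digits, smaller worst-case error
```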

With computer-controlled digital systems, new functions can be added through software revision, with no hardware changes. Often this can be done outside of the factory by updating the product’s software. In this way, design errors can be corrected even after the product is in a customer’s hands.

Information storage can be easier in digital systems than in analog ones. The noise immunity of digital systems permits data to be stored and retrieved without degradation. In an analog system, noise from aging and wear degrade the information stored. In a digital system, as long as the total noise is below a certain level, the information can be recovered perfectly. Even when more significant noise is present, the use of redundancy permits the recovery of the original data provided too many errors do not occur.
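The simplest form of such redundancy is a repetition code: store each bit several times and take a majority vote on retrieval. A minimal sketch (triple repetition, hypothetical data):

```python
def encode(bits, n=3):
    """Repeat each bit n times (a simple redundancy scheme)."""
    return [b for bit in bits for b in [bit] * n]

def decode(stream, n=3):
    """Majority-vote each group of n copies to recover the original bit."""
    out = []
    for i in range(0, len(stream), n):
        group = stream[i:i + n]
        out.append(1 if sum(group) > n // 2 else 0)
    return out

data = [1, 0, 1, 1]
sent = encode(data)
sent[4] ^= 1                 # one stored copy is corrupted
assert decode(sent) == data  # majority vote still recovers the data
```

Real systems use far more efficient codes (parity, Hamming, Reed–Solomon), but the principle is the same: provided too many errors do not occur, the original data survives.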

In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat, which increases the complexity of the circuits, for example by requiring heat sinks. In portable or battery-powered systems this can limit use of digital systems. For example, battery-powered cellular telephones often use a low-power analog front-end to amplify and tune in the radio signals from the base station. However, a base station has grid power and can use power-hungry, but very flexible software radios. Such base stations can be easily reprogrammed to process the signals used in new cellular standards.

Many useful digital systems must translate from continuous analog signals to discrete digital signals. This causes quantization errors. Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist–Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal.
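The theorem's guideline is simple: sample faster than twice the highest frequency present. The sketch below (illustrative frequencies only) shows both the rule and what goes wrong when it is violated: sampled at 4 Hz, a 3 Hz cosine produces exactly the same samples as a 1 Hz cosine, a phenomenon known as aliasing.

```python
import math

def nyquist_rate(max_frequency_hz):
    """Minimum sampling rate needed for a signal whose highest
    frequency component is max_frequency_hz."""
    return 2 * max_frequency_hz

# CD audio samples at 44,100 Hz, above the Nyquist rate for the
# roughly 20 kHz upper limit of human hearing.
assert 44_100 > nyquist_rate(20_000)

fs = 4  # sampling rate in Hz; Nyquist limit is fs/2 = 2 Hz
samples_3hz = [math.cos(2 * math.pi * 3 * k / fs) for k in range(8)]
samples_1hz = [math.cos(2 * math.pi * 1 * k / fs) for k in range(8)]
# 3 Hz exceeds the limit, so its samples alias onto the 1 Hz tone.
assert all(abs(a - b) < 1e-9 for a, b in zip(samples_3hz, samples_1hz))
```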

In some systems, if a single piece of digital data is lost or misinterpreted, the meaning of large blocks of related data can completely change. For example, a single-bit error in audio data stored directly as linear pulse code modulation causes, at worst, a single click. Instead, many people use audio compression to save storage space and download time, even though a single bit error may cause a larger disruption.

Because of the cliff effect, it can be difficult for users to tell if a particular system is right on the edge of failure, or if it can tolerate much more noise before failing. Digital fragility can be reduced by designing a digital system for robustness. For example, a parity bit or other error management method can be inserted into the signal path. These schemes help the system detect errors, and then either correct the errors, or request retransmission of the data.
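A parity bit is the smallest example of such an error-management scheme. The sketch below (even parity over a hypothetical 4-bit word) shows how a single flipped bit is detected, though a lone parity bit cannot say which bit flipped or correct it.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(word):
    """Return True if the word (data + parity bit) has even parity."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
assert check_parity(word)        # clean transmission passes the check
word[2] ^= 1                     # a single-bit error in transit...
assert not check_parity(word)    # ...is detected (though not located)
```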

## Construction

A digital circuit is typically constructed from small electronic circuits called logic gates that can be used to create combinational logic. Each logic gate is designed to perform a function of boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors but thermionic valves have seen historic use. The output of a logic gate can, in turn, control or feed into more logic gates.
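The composability of gates is easy to demonstrate in software. In this sketch every gate is derived from a single NAND function, mirroring how hardware libraries build complex logic out of one well-characterized switching cell; NAND is functionally complete, so any Boolean function can be reached this way.

```python
def nand(a, b):
    """The primitive gate: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    # XOR built purely from the gates above
    return and_(or_(a, b), nand(a, b))

# Verify every derived gate against its truth table.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
```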

Another form of digital circuit is constructed from lookup tables (many sold as “programmable logic devices”, though other kinds of PLDs exist). Lookup tables can perform the same functions as machines based on logic gates, but can be easily reprogrammed without changing the wiring. This means that a designer can often repair design errors without changing the arrangement of wires. Therefore, in small-volume products, programmable logic devices are often the preferred solution. They are usually designed by engineers using electronic design automation software.
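The lookup-table idea can be sketched directly (a toy 2-input LUT, not any vendor's architecture): the gate's entire truth table is stored as data, and "reprogramming" the device is just rewriting those entries rather than rewiring anything.

```python
def make_lut(truth_table):
    """Build a 2-input gate from a stored truth table: a dict mapping
    (a, b) input pairs to output bits."""
    def lut(a, b):
        return truth_table[(a, b)]
    return lut

# Program the table to behave as XOR...
xor_lut = make_lut({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})
assert [xor_lut(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]

# ...and "repair a design error" by loading a different table, no rewiring:
nand_lut = make_lut({(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0})
assert nand_lut(1, 1) == 0
```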

Integrated circuits consist of multiple transistors on one silicon chip, and are the least expensive way to make large numbers of interconnected logic gates. Integrated circuits are usually interconnected on a printed circuit board, a board that holds electrical components and connects them together with copper traces.

## Design

Engineers use many methods to minimize logic redundancy in order to reduce the circuit complexity. Reduced complexity reduces component count and potential errors and therefore typically reduces cost. Logic redundancy can be removed by several well-known techniques, such as binary decision diagramsBoolean algebraKarnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method. These operations are typically performed within a computer-aided design system.
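The core check behind all of these minimization techniques is Boolean equivalence: a redundant expression and its minimized form must agree on every input combination. A brute-force sketch (a hypothetical 2-input example, not the Quine–McCluskey algorithm itself):

```python
from itertools import product

def equivalent(f, g, n_inputs):
    """Exhaustively compare two Boolean functions over all 2**n_inputs
    input combinations."""
    return all(bool(f(*v)) == bool(g(*v)) for v in product((0, 1), repeat=n_inputs))

# Redundant form: (a AND b) OR (a AND NOT b). Boolean algebra
# collapses it to just a, removing a gate from the circuit.
redundant = lambda a, b: (a and b) or (a and not b)
minimized = lambda a, b: a
assert equivalent(redundant, minimized, 2)
```

Tools like Karnaugh maps and Quine–McCluskey find such reductions systematically rather than by exhaustive comparison, but the correctness criterion is the same.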

Embedded systems with microcontrollers and programmable logic controllers are often used to implement digital logic for complex systems that don’t require optimal performance. These systems are usually programmed by software engineers or by electricians, using ladder logic.

### Representation

Representations are crucial to an engineer’s design of digital circuits. To choose representations, engineers consider types of digital systems.

The classical way to represent a digital circuit is with an equivalent set of logic gates. Each logic symbol is represented by a different shape. The actual set of shapes was introduced in 1984 under IEEE/ANSI standard 91-1984 and is now in common use by integrated circuit manufacturers.[41] Another way is to construct an equivalent system of electronic switches (usually transistors). This can be represented as a truth table.

Most digital systems divide into combinational and sequential systems. A combinational system always presents the same output when given the same inputs. A sequential system is a combinational system with some of the outputs fed back as inputs. This makes the digital machine perform a sequence of operations. The simplest sequential system is probably a flip flop, a mechanism that represents a binary digit or “bit“. Sequential systems are often designed as state machines. In this way, engineers can design a system’s gross behavior, and even test it in a simulation, without considering all the details of the logic functions.
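A sequential system can be sketched as exactly that feedback structure: combinational next-state logic plus a state variable carried between steps. This toy Moore machine (hypothetical example) raises its output after seeing two consecutive 1s; the output depends only on the state, not directly on the input.

```python
def step(state, bit):
    """Combinational next-state logic: count consecutive 1s, saturating at 2."""
    return min(state + 1, 2) if bit else 0

def detect(bits):
    """Run the state machine over an input sequence, one clock per bit."""
    state, outputs = 0, []
    for b in bits:
        state = step(state, b)                  # state fed back as input
        outputs.append(1 if state >= 2 else 0)  # output is a function of state
    return outputs

assert detect([0, 1, 1, 1, 0, 1]) == [0, 0, 1, 1, 0, 0]
```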

Sequential systems divide into two further subcategories. “Synchronous” sequential systems change state all at once when a clock signal changes state. “Asynchronous” sequential systems propagate changes whenever inputs change. Synchronous sequential systems are made of well-characterized asynchronous circuits such as flip-flops, that change only when the clock changes, and which have carefully designed timing margins.

For logic simulation, digital circuit representations have digital file formats that can be processed by computer programs.

### Synchronous systems

A 4-bit ring counter using D-type flip flops is an example of synchronous logic. Each device is connected to the clock signal, and all update together.

The usual way to implement a synchronous sequential state machine is to divide it into a piece of combinational logic and a set of flip flops called a state register. The state register represents the state as a binary number. The combinational logic produces the binary representation for the next state. On each clock cycle, the state register captures the feedback generated from the previous state of the combinational logic and feeds it back as an unchanging input to the combinational part of the state machine. The clock rate is limited by the most time-consuming logic calculation in the combinational logic.
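The ring counter mentioned above is a minimal instance of this scheme, and simple to simulate (a behavioral sketch, ignoring electrical timing): on each clock edge every D flip-flop captures its neighbour's output, so the single hot bit rotates through the register.

```python
def ring_counter(start, cycles):
    """Simulate a 4-bit ring counter built from D-type flip flops.
    On each clock edge, every flip-flop loads its predecessor's output."""
    state = list(start)
    history = [tuple(state)]
    for _ in range(cycles):
        state = [state[-1]] + state[:-1]  # simultaneous capture, then rotate
        history.append(tuple(state))
    return history

states = ring_counter([1, 0, 0, 0], 4)
assert states == [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0),
                  (0, 0, 0, 1), (1, 0, 0, 0)]  # back to the start after 4 clocks
```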

### Asynchronous systems

Most digital logic is synchronous because it is easier to create and verify a synchronous design. However, asynchronous logic has the advantage of its speed not being constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic gates.[a] Building an asynchronous system using faster parts makes the circuit faster.

Nevertheless, most systems need to accept external unsynchronized signals into their synchronous logic circuits. This interface is inherently asynchronous and must be analyzed as such. Examples of widely used asynchronous circuits include synchronizer flip-flops, switch debouncers and arbiters.

Asynchronous logic components can be hard to design because all possible states, in all possible timings must be considered. The usual method is to construct a table of the minimum and maximum time that each such state can exist and then adjust the circuit to minimize the number of such states. The designer must force the circuit to periodically wait for all of its parts to enter a compatible state (this is called “self-resynchronization”). Without careful design, it is easy to accidentally produce asynchronous logic that is unstable, that is, real electronics will have unpredictable results because of the cumulative delays caused by small variations in the values of the electronic components.

### Register transfer systems

Example of a simple circuit with a toggling output. The inverter forms the combinational logic in this circuit, and the register holds the state.

Many digital systems are data flow machines. These are usually designed using synchronous register transfer logic, using hardware description languages such as VHDL or Verilog.

In register transfer logic, binary numbers are stored in groups of flip flops called registers. A sequential state machine controls when each register accepts new data from its input. The outputs of each register are a bundle of wires called a bus that carries that number to other calculations. A calculation is simply a piece of combinational logic. Each calculation also has an output bus, and these may be connected to the inputs of several registers. Sometimes a register will have a multiplexer on its input so that it can store a number from any one of several buses.[b]
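The toggling-output circuit from the figure caption above is the smallest register-transfer example: one register holds the state, and an inverter is the entire combinational block. A behavioral sketch:

```python
def clock_cycle(register):
    """The combinational logic: an inverter between the register's
    output and its input."""
    return 1 - register

# On each clock edge the register captures the inverter's output.
register = 0
trace = []
for _ in range(6):
    register = clock_cycle(register)
    trace.append(register)
assert trace == [1, 0, 1, 0, 1, 0]  # the output toggles every clock
```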

Asynchronous register-transfer systems (such as computers) have a general solution. In the 1980s, some researchers discovered that almost all synchronous register-transfer machines could be converted to asynchronous designs by using first-in-first-out synchronization logic. In this scheme, the digital machine is characterized as a set of data flows. In each step of the flow, a synchronization circuit determines when the outputs of that step are valid and instructs the next stage when to use these outputs.[citation needed]

### Computer design

Intel 80486DX2 microprocessor

The most general-purpose register-transfer logic machine is a computer. This is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer. A microprogram is much like a player-piano roll. Each table entry or “word” of the microprogram commands the state of every bit that controls the computer. The sequencer then counts, and the count addresses the memory or combinational logic machine that contains the microprogram. The bits from the microprogram control the arithmetic logic unit, memory, and other parts of the computer, including the microsequencer itself. A “specialized computer” is usually a conventional computer with special-purpose control logic or microprogram.
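The player-piano-roll analogy can be made concrete with a toy microsequencer (an invented example, not any real machine's control format): each word of the microprogram drives the datapath, the sequencer's counter addresses the next word, and a jump word rewrites the counter itself, just as described above.

```python
# Hypothetical micro-operations for a one-accumulator datapath.
LOAD, ADD, JUMP = "load", "add", "jump"

microprogram = [
    (LOAD, 0),   # acc <- 0
    (ADD, 5),    # acc <- acc + 5
    (ADD, 5),    # acc <- acc + 5
    (JUMP, 4),   # rewrite the sequencer's counter: skip the next word
    (ADD, 99),   # never executed
]

def run(program, steps=4):
    """Step the microsequencer: the counter pc addresses the microprogram,
    and each word controls the accumulator (and the sequencer itself)."""
    acc, pc = 0, 0
    for _ in range(steps):
        if pc >= len(program):
            break
        op, arg = program[pc]
        if op == LOAD:
            acc, pc = arg, pc + 1
        elif op == ADD:
            acc, pc = acc + arg, pc + 1
        elif op == JUMP:
            pc = arg  # the microprogram controls the microsequencer
    return acc

assert run(microprogram) == 10
```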

In this way, the complex task of designing the controls of a computer is reduced to a simpler task of programming a collection of much simpler logic machines.

Almost all computers are synchronous. However, numerous true asynchronous computers have also been built. One example is the Aspida DLX core.[43] Another was offered by ARM Holdings. Speed advantages have not materialized, because modern computer designs already run at the speed of their slowest component, usually memory. These do use somewhat less power because a clock distribution network is not needed. An unexpected advantage is that asynchronous computers do not produce spectrally-pure radio noise, so they are used in some mobile-phone base-station controllers. They may be more secure in cryptographic applications because their electrical and radio emissions can be more difficult to decode.[44]

### Computer architecture

Computer architecture is a specialized engineering activity that tries to arrange the registers, calculation logic, buses and other parts of the computer in the best way for some purpose. Computer architects have applied large amounts of ingenuity to computer design to reduce the cost and increase the speed and immunity to programming errors of computers. An increasingly common goal is to reduce the power used in a battery-powered computer system, such as a cell-phone. Many computer architects serve an extended apprenticeship as microprogrammers.