Tuesday, 16 August 2016

LAPTOP @ Rs. 9,510 on Amazon

iBall Excelance CompBook 11.6-inch Laptop (Atom Z3735F/2GB/32GB/Windows 10/Integrated Graphics)


Click the link to buy:
BUY

Tuesday, 10 February 2015

WEB CAMERAS

A webcam is a video camera that feeds or streams its image in real time to or through a computer to a computer network. When "captured" by the computer, the video stream may be saved, viewed, or sent on to other networks via systems such as the Internet, or emailed as an attachment. When sent to a remote location, the video stream may be saved, viewed, or sent on from there. Unlike an IP camera (which connects using Ethernet or Wi-Fi), a webcam is generally connected by a USB cable or similar, or built into computer hardware such as laptops.
Their most popular use is the establishment of video links, permitting computers to act as videophones or videoconference stations. Other popular uses include security surveillance, computer vision, video broadcasting, and for recording social videos.
Webcams are known for their low manufacturing cost and flexibility,[1] making them the lowest-cost form of videotelephony. They have also become a source of security and privacy issues, as some built-in webcams can be remotely activated via spyware.

Video security

Webcams can be used as security cameras. Software is available to allow PC-connected cameras to watch for movement and sound,[10] recording both when they are detected. These recordings can then be saved to the computer, e-mailed, or uploaded to the Internet. In one well-publicised case,[11] a computer e-mailed images of the burglar during the theft of the computer, enabling the owner to give police a clear picture of the burglar's face even after the computer had been stolen.
Recently, webcam privacy software has been introduced by companies such as Stop Being Watched and Webcamlock. The software monitors access to the webcam and prompts the user to allow or deny it, showing which program is attempting the access; the user can accept a trusted program or terminate the attempt immediately. Other companies manufacture and sell sliding lens covers that let users retrofit a computer and physically block the camera lens.
In December 2011, Russia announced that 290,000 webcams would be installed in 90,000 polling stations to monitor the 2012 Russian presidential election.[12]

Video clips and stills

Webcams can be used to take video clips and still pictures. Various software tools in wide use can be employed for this, such as PicMaster (for use with Windows operating systems), Photo Booth (Mac), or Cheese (with Unix systems). For a more complete list, see Comparison of webcam software.

Input control devices

Special software can use the video stream from a webcam to assist or enhance a user's control of applications and games. Video features, including faces, shapes, models, and colors, can be observed and tracked to produce a corresponding form of control. For example, the position of a single light source can be tracked and used to emulate a mouse pointer; a head-mounted light would enable hands-free computing and would greatly improve computer accessibility. This can be applied to games, providing additional control, improved interactivity, and immersiveness.
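As a rough illustration of the light-tracking idea, the sketch below finds the brightest point in each webcam frame and maps it to the screen cursor. It is only a minimal sketch, assuming the OpenCV (cv2) and pyautogui packages are installed and a webcam is available at index 0.

```python
# A minimal sketch of light-source tracking as a pointer, assuming OpenCV
# (cv2) and pyautogui are installed and a webcam is present at index 0.
import cv2
import pyautogui

screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)                         # open the default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (11, 11), 0)    # suppress sensor noise
    # Locate the brightest pixel; a head-mounted light dominates the frame.
    _, max_val, _, (x, y) = cv2.minMaxLoc(gray)
    if max_val > 200:                             # ignore frames with no light
        h, w = gray.shape
        # Mirror x so moving the light right moves the cursor right
        # (the camera faces the user).
        pyautogui.moveTo((1 - x / w) * screen_w, y / h * screen_h)
    cv2.imshow("tracking", gray)
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break
cap.release()
```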
FreeTrack is a free webcam motion-tracking application for Microsoft Windows that can track a special head-mounted model in up to six degrees of freedom and output data to mouse, keyboard, joystick and FreeTrack-supported games. By removing the webcam's IR filter, IR LEDs can be used, which have the advantage of being invisible to the naked eye, removing a distraction for the user. TrackIR is a commercial version of this technology.
The EyeToy for the PlayStation 2, the PlayStation Eye for the PlayStation 3, and the Xbox Live Vision camera and Kinect motion sensor for the Xbox 360 are color digital cameras that have been used as control input devices by some games.
Small webcam-based PC games are available as either standalone executables or inside web browser windows using Adobe Flash.

Astro photography

With very low-light capability, a few specific models of webcam are very popular among astronomers and astrophotographers for photographing the night sky. Mostly these are manual-focus cameras containing an older CCD sensor rather than the comparatively newer CMOS sensors. The cameras' lenses are removed and the bodies are then attached to telescopes to record images, video, or both. In newer techniques, video of a very faint object is recorded for a few seconds and all the frames are then 'stacked' together to obtain a still image of respectable contrast. The Philips PCVC 740K and SPC 900 are two of the few webcams favoured by astrophotographers.
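The stacking step itself is simple: averaging N frames suppresses random noise by roughly a factor of the square root of N, which is what lifts a faint object above the background. Below is a minimal sketch, assuming OpenCV and NumPy are installed and a hypothetical recording named capture.avi.

```python
# A minimal sketch of frame stacking, assuming OpenCV and NumPy are
# installed; "capture.avi" is a hypothetical recording of a faint object.
import cv2
import numpy as np

cap = cv2.VideoCapture("capture.avi")
acc = None          # running sum of frames
n = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    f = frame.astype(np.float64)        # avoid 8-bit overflow while summing
    acc = f if acc is None else acc + f
    n += 1
cap.release()

# Averaging reduces random noise by roughly sqrt(n).
stacked = (acc / n).clip(0, 255).astype(np.uint8)
cv2.imwrite("stacked.png", stacked)
```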

Technology

Webcams typically include a lens, an image sensor, support electronics, and may also include a microphone for sound. Various lenses are available, the most common in consumer-grade webcams being a plastic lens that can be screwed in and out to focus the camera. Fixed-focus lenses, which have no provision for adjustment, are also available. Because a camera system's depth of field is greater for small image formats and for lenses with a large f-number (small aperture), the systems used in webcams have a sufficiently large depth of field that the use of a fixed-focus lens does not noticeably impact image sharpness.
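This follows from a standard depth-of-field approximation (not stated in the original text): for a subject at distance s much greater than the focal length f, with f-number N and circle of confusion c,

```latex
\mathrm{DOF} \approx \frac{2\,N\,c\,s^{2}}{f^{2}}
```

Since webcams use small sensors with correspondingly short focal lengths, the f squared in the denominator keeps the depth of field large.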
Image sensors can be CMOS or CCD, the former being dominant for low-cost cameras, though CCD cameras do not necessarily outperform CMOS-based cameras in the low-cost price range. Most consumer webcams are capable of providing VGA-resolution video at a frame rate of 30 frames per second. Many newer devices can produce video in multi-megapixel resolutions, and a few can run at high frame rates, such as the PlayStation Eye, which can produce 320×240 video at 120 frames per second.
Support electronics read the image from the sensor and transmit it to the host computer. One common design, for example, uses a Sonix SN9C101 controller to transmit its image over USB. Typically, each frame is transmitted uncompressed in RGB or YUV, or compressed as JPEG. Some cameras, such as mobile-phone cameras, use a CMOS sensor with the supporting electronics "on die", i.e. the sensor and the support electronics are built on a single silicon chip to save space and manufacturing costs. Most webcams feature built-in microphones to make video calling and videoconferencing more convenient.
The USB video device class (UVC) specification allows for interconnectivity of webcams to computers without the need for proprietary device drivers. Microsoft Windows XP SP2, Linux[13] and Mac OS X (since October 2005) have UVC support built in and do not require extra device drivers, although they are often installed to add additional features.
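Because the OS's built-in UVC class driver handles the device, grabbing a frame takes only a few lines. A minimal sketch, assuming the OpenCV package is installed and a UVC webcam is present as device 0:

```python
# A minimal sketch of grabbing one frame from a UVC webcam via OpenCV;
# the OS's built-in UVC class driver does the work, so no vendor-specific
# driver is needed.
import cv2

cap = cv2.VideoCapture(0)                  # first video device
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)     # request VGA resolution
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()                     # one decoded BGR frame
if ok:
    cv2.imwrite("snapshot.png", frame)
cap.release()
```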

Privacy

Many users do not wish the continuous exposure for which webcams were originally intended, but rather prefer privacy.[14] Such privacy is lost when malware allows malicious hackers to activate the webcam without the user's knowledge, providing the hackers with a live video and audio feed.[15] Cameras such as Apple's older external iSight cameras include lens covers to thwart this. Some webcams have built-in hardwired LED indicators that light up whenever the camera is active, though sometimes only in video mode. It is not clear whether these indicators can be circumvented when webcams are surreptitiously activated by spyware without the user's knowledge or intent.
In the field of computer security, camfecting is the fraudulent process of attempting to hack into a person's webcam and activate it without the webcam owner's permission. The remotely activated webcam can be used to watch anything within the webcam's field of vision, sometimes including the webcam's owner. Camfecting is most often carried out by infecting the victim's computer with a virus that gives the hacker access to the victim's webcam. The attack specifically targets the victim's webcam, hence the name camfecting, a portmanteau of cam and infecting.
In January 2005, some search-engine queries were published in an online forum[16] which allowed anyone to find thousands of high-end Panasonic and Axis web cameras, provided that they had a web-based interface for remote viewing. Many such cameras run on a default configuration that requires no password login or IP-address verification, making them viewable by anyone.
Some laptop computers have built-in webcams which present both privacy and security issues, as such cameras cannot normally be physically disabled if hijacked by a Trojan horse or similar spyware program. In the 2010 Robbins v. Lower Merion School District "WebcamGate" case, plaintiffs charged that two suburban Philadelphia high schools secretly spied on students, by surreptitiously and remotely activating iSight webcams embedded in school-issued MacBook laptops the students were using at home, and thereby infringed on their privacy rights. School authorities admitted to secretly snapping over 66,000 photographs, including shots of students in the privacy of their bedrooms, some showing teenagers in various states of undress.[17][18] The school board involved quickly disabled the laptop spyware program after parents filed lawsuits against the board and various individuals.[19][20]
To protect their privacy, many users use security software and/or a physical camera cover; the latter is claimed to be the most reliable form of protection.

Effects on modern society

Webcams allow for inexpensive, real-time video chat and webcasting, in both amateur and professional pursuits. They are frequently used in online dating and for online personal services, offered mainly by women, known as camgirling. However, the ease of webcam use through the Internet for video chat has also caused issues. For example, the moderation systems of various video-chat websites such as Omegle have been criticized as ineffective, with sexual content still rampant.[21] In a 2013 case, the transmission of nude photos and videos via Omegle from a teenage girl to a schoolteacher resulted in a child pornography charge.[22]
YouTube is a popular website hosting many videos made using webcams. News websites such as the BBC also produce professional live news videos using webcams rather than traditional cameras.[23]
Webcams can also encourage telecommuting, enabling people to work from home via the Internet, rather than traveling to their office.
The popularity of webcams among teenagers with Internet access has raised concern about the use of webcams for cyber-bullying.[24] Webcam recordings of teenagers, including underage teenagers, are frequently posted on popular Web forums and imageboards such as 4chan.[25][26]

Descriptive names and terminology

Videophone calls (also: videocalls and video chat)[27] differ from videoconferencing in that they expect to serve individuals, not groups.[28] However, that distinction has become increasingly blurred with technology improvements such as increased bandwidth and sophisticated software clients that allow for multiple parties on a call. In general everyday usage, the term videoconferencing is now frequently used instead of videocall for point-to-point calls between two units. Both videophone calls and videoconferencing are also now commonly referred to as a video link.
Webcams are popular, relatively low cost devices which can provide live video and audio streams via personal computers, and can be used with many software clients for both video calls and videoconferencing.[29]
A videoconference system is generally higher-cost than a videophone and provides greater capabilities. A videoconference (also known as a videoteleconference) allows two or more locations to communicate via live, simultaneous two-way video and audio transmissions. This is often accomplished by the use of a multipoint control unit (a centralized distribution and call-management system) or by a similar non-centralized multipoint capability embedded in each videoconferencing unit. Again, technology improvements have circumvented traditional definitions by allowing multiple-party videoconferencing via web-based applications.[30][31]
A telepresence system is a high-end videoconferencing system and service usually employed by enterprise-level corporate offices. Telepresence conference rooms use state-of-the-art room designs, video cameras, displays, sound systems and processors, coupled with high-to-very-high-capacity bandwidth transmissions.
Typical uses of the various technologies described above include calling or conferencing on a one-on-one, one-to-many or many-to-many basis for personal, business, educational, deaf video-relay and tele-medical, diagnostic and rehabilitative services. New services utilizing videocalling and videoconferencing, such as teachers and psychologists conducting online sessions,[32] personal videocalls to inmates incarcerated in penitentiaries, and videoconferencing to resolve airline engineering issues at maintenance facilities, are being created or are evolving on an ongoing basis.

Monday, 9 February 2015

MOUSE

Mouse (computing)



A computer mouse with the most common standard features: two buttons and a scroll wheel, which can also act as a third button.
In computing, a mouse is a pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of a pointer on a display, which allows for fine control of a graphical user interface.

Physically, a mouse consists of an object held in one's hand, with one or more buttons. Mice often also feature other elements, such as touch surfaces and "wheels", which enable additional control and dimensional input.


Naming

The earliest known publication of the term mouse as a computer pointing device is in Bill English's 1965 publication "Computer-Aided Display Control".[1]

The online Oxford Dictionaries entry for mouse states the plural for the small rodent is mice, while the plural for the small computer connected device is either mice or mouses. However, in the use section of the entry it states that the more common plural is mice, and that the first recorded use of the term in the plural is mice as well[2] (though it cites a 1984 use of mice when there were actually several earlier ones, such as J. C. R. Licklider's "The Computer as a Communication Device" of 1968[3]). According to the fifth edition of The American Heritage Dictionary of the English Language the plural can be either "mice" or "mouses".[4]

History

The trackball, a related pointing device, was invented in 1946 by Ralph Benjamin as part of a post-World War II-era fire-control radar plotting system called Comprehensive Display System (CDS). Benjamin was then working for the British Royal Navy Scientific Service. Benjamin's project used analog computers to calculate the future position of target aircraft based on several initial input points provided by a user with a joystick. Benjamin felt that a more elegant input device was needed and invented a ball tracker[5] called roller ball[6] for this purpose.

The device was patented in 1947,[6] but only a prototype using a metal ball rolling on two rubber-coated wheels was ever built[5] and the device was kept as a military secret.[5]

Another early trackball was built by British electrical engineer Kenyon Taylor in collaboration with Tom Cranston and Fred Longstaff. Taylor was part of the original Ferranti Canada, working on the Royal Canadian Navy's DATAR (Digital Automated Tracking and Resolving) system in 1952.[7]

DATAR was similar in concept to Benjamin's display. The trackball used four disks to pick up motion, two each for the X and Y directions. Several rollers provided mechanical support. When the ball was rolled, the pickup discs spun and contacts on their outer rim made periodic contact with wires, producing pulses of output with each movement of the ball. By counting the pulses, the physical movement of the ball could be determined. A digital computer calculated the tracks, and sent the resulting data to other ships in a task force using pulse-code modulation radio signals. This trackball used a standard Canadian five-pin bowling ball. It was not patented, as it was a secret military project as well.[8][9]


Early mouse patents. From left to right: Opposing track wheels by Engelbart, Nov. 1970, U.S. Patent 3,541,541. Ball and wheel by Rider, Sept. 1974, U.S. Patent 3,835,464. Ball and two rollers with spring by Opocensky, Oct. 1976, U.S. Patent 3,987,685
Independently, Douglas Engelbart at the Stanford Research Institute (now SRI International) invented his first mouse prototype in the 1960s with the assistance of his lead engineer Bill English.[10] They christened the device the mouse as early models had a cord attached to the rear part of the device looking like a tail and generally resembling the common mouse.[11] Engelbart never received any royalties for it, as his employer SRI held the patent, which ran out before it became widely used in personal computers.[12] The invention of the mouse was just a small part of Engelbart's much larger project, aimed at augmenting human intellect via the Augmentation Research Center.[13][14]


Inventor Douglas Engelbart's computer mouse, showing the wheels that make contact with the working surface.
Several other experimental pointing devices developed for Engelbart's oN-Line System (NLS) exploited different body movements – for example, head-mounted devices attached to the chin or nose – but ultimately the mouse won out because of its speed and convenience.[15] The first mouse, a bulky device, used two wheels perpendicular to each other: the rotation of each wheel translated into motion along one axis. At the time of the "Mother of All Demos", Engelbart's group had been using their second-generation, three-button mouse for about a year.

On 2 October 1968, just a few months before Engelbart gave his demo on 9 December 1968, a mouse device named Rollkugel (German for "rolling ball") was released that had been developed and published by the German company Telefunken. As the name suggests, and unlike Engelbart's mouse, the Telefunken model already had a ball. It was based on an earlier trackball-like device (also named Rollkugel) that was embedded into radar flight-control desks. This had been developed around 1965 by a team led by Rainer Mallebrein at Telefunken Konstanz for the German Bundesanstalt für Flugsicherung as part of their TR 86 process computer system with its SIG 100-86[16] vector graphics terminal.


The first ball-based computer mouse in 1968, Telefunken Rollkugel RKS 100-86 for their TR 86 process computer system.
When development of the Telefunken mainframe TR 440 began in 1965, Mallebrein and his team came up with the idea of "reversing" the existing Rollkugel into a moveable mouse-like device, so that customers did not have to be bothered with mounting holes for the earlier trackball device. Together with light pens and trackballs, it was offered as an optional input device for their system from 1968. Some samples, installed at the Leibniz-Rechenzentrum in Munich in 1972, are still well preserved.[17][18] Telefunken considered the invention too small to apply for a patent.

The Xerox Alto, one of the first computers designed for individual use, appeared in 1973 and is regarded as the grandfather of computers that utilize the mouse.[19] Inspired by PARC's Alto, the Lilith, a computer developed by a team around Niklaus Wirth at ETH Zürich between 1978 and 1980, provided a mouse as well. The third marketed version of an integrated mouse, shipped as part of a computer and intended for personal-computer navigation, came with the Xerox 8010 Star Information System in 1981. In 1982, Microsoft made the decision to make the MS-DOS program Microsoft Word mouse-compatible and developed the first PC-compatible mouse. Microsoft's mouse shipped in 1983, thus beginning Microsoft Hardware.[20] However, the mouse remained relatively obscure until the appearance of the Macintosh 128K in 1984, which included an updated version of the original Lisa mouse,[21] and of the Atari ST in 1985.

Operation

Further information: Point and click
A mouse typically controls the motion of a pointer in two dimensions in a graphical user interface (GUI). The mouse turns movements of the hand backward and forward, left and right, into equivalent electronic signals that in turn are used to move the pointer.

The relative movements of the mouse on the surface are applied to the position of the pointer on the screen, which signals the point where actions of the user take place, so that the hand movements are replicated by the pointer.[22] Clicking or hovering (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook, and clicking while the cursor hovers over this icon might cause a text-editing program to open the file in a window.

Different ways of operating the mouse cause specific things to happen in the GUI:[22]

Click: pressing and releasing a button.
(left) Single-click: clicking the main button.
(left) Double-click: clicking the button two times in quick succession counts as a different gesture than two separate single clicks.
(left) Triple-click: clicking the button three times in quick succession.
Right-click: clicking the secondary button.
Middle-click: clicking the tertiary button.
Drag: pressing and holding a button, then moving the mouse without releasing. (One says "drag with the right mouse button" rather than simply "drag" when instructing a user to drag an object while holding the right mouse button down instead of the more commonly used left one.)
Button chording (a.k.a. Rocker navigation).
Combination of right-click then left-click.
Combination of left-click then right-click or keyboard letter.
Combination of left or right-click and the mouse wheel.
Clicking while holding down a modifier key.
Moving the pointer a long distance: When a practical limit of mouse movement is reached, one lifts up the mouse, brings it to the opposite edge of the working area while it is held above the surface, and then replaces it down onto the working surface. This is often not necessary, because acceleration software detects fast movement, and moves the pointer significantly faster in proportion than for slow mouse motion.
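As a rough illustration of how a GUI toolkit distinguishes the actions above, the sketch below uses Python's built-in tkinter; the event names are tkinter's own, and the printed messages are placeholders for real application behaviour.

```python
# A minimal sketch of mouse-action handling using Python's built-in tkinter.
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=300, height=200, bg="white")
canvas.pack()

# Each binding corresponds to one of the mouse actions described above.
canvas.bind("<Button-1>", lambda e: print("single left click at", e.x, e.y))
canvas.bind("<Double-Button-1>", lambda e: print("double click"))
canvas.bind("<Button-3>", lambda e: print("right click (context menu)"))
canvas.bind("<B1-Motion>", lambda e: print("drag with left button held"))
canvas.bind("<Control-Button-1>", lambda e: print("click with modifier key"))

root.mainloop()
```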
Mouse gestures
Users can also employ mice gesturally, meaning that a stylized motion of the mouse cursor itself, called a "gesture", can issue a command or map to a specific action. For example, in a drawing program, moving the mouse in a rapid "x" motion over a shape might delete the shape.

Gestural interfaces occur more rarely than plain pointing-and-clicking; and people often find them more difficult to use, because they require finer motor-control from the user. However, a few gestural conventions have become widespread, including the drag and drop gesture, in which:

The user presses the mouse button while the mouse cursor hovers over an interface object
The user moves the cursor to a different location while holding the button down
The user releases the mouse button
For example, a user might drag-and-drop a picture representing a file onto a picture of a trash can, thus instructing the system to delete the file.

Standard semantic gestures include:

Crossing-based goal
Drag and drop
Menu traversal
Pointing
Rollover (Mouseover)
Selection

Specific uses

Other uses of the mouse's input occur commonly in special application-domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head. A related function makes an image of an object rotate, so that all sides can be examined.
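A minimal sketch of that mouse-look mapping, with illustrative sensitivity and clamping values (not taken from any particular game):

```python
# A minimal sketch of mouse-look: relative mouse motion becomes camera angles.
SENSITIVITY = 0.1   # degrees of rotation per mouse count (illustrative)

yaw, pitch = 0.0, 0.0

def on_mouse_move(dx, dy):
    """Translate relative mouse motion into camera orientation."""
    global yaw, pitch
    yaw = (yaw + dx * SENSITIVITY) % 360.0   # turn left/right, wrap at 360
    pitch -= dy * SENSITIVITY                # look up/down
    pitch = max(-90.0, min(90.0, pitch))     # clamp so the view cannot flip

# e.g. the mouse reports 40 counts right and 15 counts up:
on_mouse_move(40, -15)
print(yaw, pitch)   # 4.0 1.5
```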

When mice have more than one button, software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button.


Operating an opto-mechanical mouse:
1. Moving the mouse turns the ball.
2. X and Y rollers grip the ball and transfer movement.
3. Optical encoding disks include light holes.
4. Infrared LEDs shine through the disks.
5. Sensors gather light pulses to convert to X and Y vectors.
The German company Telefunken published details of its early ball mouse on October 2, 1968.[17] Telefunken's mouse was sold as optional equipment for their computer systems. Bill English, builder of Engelbart's original mouse,[23] created a ball mouse in 1972 while working for Xerox PARC.[24]

The ball mouse replaced the external wheels with a single ball that could rotate in any direction. It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required.


Mechanical mouse, shown with the top cover removed. The scroll wheel is grey, to the right of the ball.
The ball mouse has two freely rotating rollers, located 90 degrees apart. One roller detects the forward–backward motion of the mouse and the other the left–right motion. Opposite the two rollers is a third one (white, in the photo, at 45 degrees) that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement. Each wheel's disc, however, has a pair of light beams, located so that a given beam becomes interrupted, or again starts to pass light freely, when the other beam of the pair is about halfway between changes.

Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. This incremental rotary encoder scheme is sometimes called quadrature encoding of the wheel rotation, as the two optical sensors produce signals that are approximately in quadrature phase. The mouse sends these signals to the computer system via the mouse cable, directly as logic signals in very old mice such as the Xerox mice, and via a data-formatting IC in modern mice. The driver software in the system converts the signals into motion of the mouse cursor along the X and Y axes of the computer screen.
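A minimal sketch of quadrature decoding, assuming idealized, glitch-free samples of the two sensor channels; real mice do this in simple logic circuits rather than software.

```python
# A minimal sketch of quadrature decoding: the two sensor channels (A, B)
# are ~90 degrees out of phase, so the order of their transitions reveals
# the direction of rotation. Valid transitions step +1 one way, -1 the other.
TRANSITIONS = {
    (0, 0): {(0, 1): +1, (1, 0): -1},
    (0, 1): {(1, 1): +1, (0, 0): -1},
    (1, 1): {(1, 0): +1, (0, 1): -1},
    (1, 0): {(0, 0): +1, (1, 1): -1},
}

def decode(samples):
    """Count net wheel steps from a sequence of (A, B) sensor samples."""
    position = 0
    prev = samples[0]
    for state in samples[1:]:
        position += TRANSITIONS[prev].get(state, 0)  # ignore repeats/glitches
        prev = state
    return position

# One full cycle forward, then one step back:
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (1, 0)]))  # prints 3
```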


Hawley Mark II Mice from the Mouse House
The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip so the mouse's movement is transmitted accurately. Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975.[25][26] Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse.[27][28] Instead of a ball, it had two wheels rotating on off-axis shafts. Key Tronic later produced a similar product.[29]

Modern computer mice took form at the École Polytechnique Fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard.[30] This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s.[31] In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design.[32] Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent",[32] though optical mice from Mouse Systems had incorporated microprocessors by 1984.

Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug compatible with an analog joystick. The "Color Mouse", originally marketed by RadioShack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input) was the best-known example.


Optical and laser mice


A wireless optical mouse on a mousepad

A standard wireless mouse and its connector
Main article: Optical mouse
Optical mice make use of one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to detect movement relative to the underlying surface, rather than internal moving parts as does a mechanical mouse. A laser mouse is an optical mouse that uses coherent (laser) light.

The earliest optical mice detected movement on pre-printed mousepad surfaces, whereas the modern optical mouse works on most opaque surfaces; it is usually unable to detect movement on specular surfaces like glass. Laser diodes are also used for better resolution and precision. Battery-powered wireless optical mice flash the LED intermittently to save power, and only glow steadily when movement is detected.

Inertial and gyroscopic mice

Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051, published in 1988) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm".

Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use.[34] In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture.

3D mice
Also known as bats,[35] flying mice, or wands,[36] these devices generally function through ultrasound and provide at least three degrees of freedom. Probably the best known example would be 3Dconnexion/Logitech's SpaceMouse from the early 1990s. In the late 1990s Kantek introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base station.[37] Despite a certain appeal, it was finally discontinued because it did not provide sufficient resolution.

A recent consumer 3D pointing device is the Wii Remote. While primarily a motion-sensing device (that is, it can determine its orientation and direction of movement), the Wii Remote can also detect its spatial position by comparing the distance and position of the lights from the IR emitter using its integrated IR camera (since the Nunchuk accessory lacks a camera, it can only tell its current heading and orientation). The obvious drawback of this approach is that it can only produce spatial coordinates while its camera can see the sensor bar.

A mouse-related controller called the SpaceBall[38] has a ball placed above the work surface that can easily be gripped. With spring-loaded centering, it sends both translational as well as angular displacements on all six axes, in both directions for each. In November 2010, a German company called Axsotic introduced a new concept of 3D mouse, the 3D Spheric Mouse. This new concept of a true six-degree-of-freedom input device uses a ball that rotates about all three axes without any limitations.[39]


Apple Desktop Bus


Apple Macintosh Plus mice: beige mouse (left), platinum mouse (right), 1986
In 1986 Apple first implemented the Apple Desktop Bus, allowing the daisy-chaining of up to 16 devices, including arbitrarily many mice and other devices on the same bus with no configuration whatsoever. Featuring only a single data pin, the bus used a purely polled approach to computer/mouse communications and survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998, when the iMac joined the industry-wide switch to USB. Beginning with the Bronze Keyboard PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005.

USB
The industry-standard USB (Universal Serial Bus) protocol and its connector have become widely used for mice; it is among the most popular types.[59]

Cordless or wireless


A wireless mouse made for notebook computers
Cordless or wireless mice transmit data via infrared radiation (see IrDA) or radio (including Bluetooth and Wi-Fi). The receiver is connected to the computer through a serial or USB port, or can be built in (as is sometimes the case with Bluetooth and Wi-Fi[60]). Modern non-Bluetooth, non-Wi-Fi wireless mice use USB receivers. Some of these can be stored inside the mouse for safe transport while not in use, while others use "nano" receivers, designed to be small enough to remain plugged into a laptop during transport, while still being large enough to remove easily.

Atari standard joystick connectivity
The Amiga and the Atari ST use an Atari standard DE-9 connector for mice, the same connector that is used for joysticks on the same computers and numerous 8-bit systems, such as the Commodore 64 and the Atari 2600. However, the signals used for mice are different from those used for joysticks. As a result, plugging a mouse into a joystick port causes the "joystick" to continuously move in some direction, even if the mouse stays still, whereas plugging a joystick into a mouse port causes the "mouse" to only be able to move a single pixel in each direction.

Multiple-mouse systems
Some systems allow two or more mice to be used at once as input devices. 16-bit era home computers such as the Amiga used this to allow computer games with two players interacting on the same computer (Lemmings and The Settlers for example). The same idea is sometimes used in collaborative software, e.g. to simulate a whiteboard that multiple users can draw on without passing a single mouse around.

Microsoft Windows, since Windows 98, has supported multiple simultaneous pointing devices. Because Windows only provides a single screen cursor, using more than one device at the same time requires cooperation of users or applications designed for multiple input devices.

Multiple mice are often used in multi-user gaming in addition to specially designed devices that provide several input interfaces.

Windows also has full support for multiple input/mouse configurations for multiuser environments.

Starting with Windows XP, Microsoft introduced an SDK for developing applications that allow multiple input devices to be used at the same time with independent cursors and independent input points.[62]

The introduction of Windows Vista and Microsoft Surface (now known as Microsoft PixelSense) brought a new set of input APIs that were adopted into Windows 7, allowing for 50 points/cursors, all controlled by independent users. The new input points provide traditional mouse input, but are designed for more advanced input technologies like touch and image. They inherently offer 3D coordinates along with pressure, size, tilt, angle, mask, and even an image bitmap to see and recognize the input point/object on the screen.

As of 2009, Linux distributions and other operating systems that use X.Org, such as OpenSolaris and FreeBSD, support 255 cursors/input points through Multi-Pointer X. However, no window managers currently support Multi-Pointer X, leaving it relegated to custom-software use.

There have also been propositions of having a single operator use two mice simultaneously as a more sophisticated means of controlling various graphics and multimedia applications.[63]

Buttons
Main article: Mouse button
Mouse buttons are microswitches which can be pressed to select or interact with an element of a graphical user interface, producing a distinctive clicking sound.

The three-button scroll mouse has become the most commonly available design. As of 2007 (and roughly since the late 1990s), users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse cursor currently sits. By default, the primary mouse button is located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software.

Mouse speed
Mickeys per second is a unit of measurement for the speed and movement direction of a computer mouse.[53] But speed can also refer to the ratio between how many pixels the cursor moves on the screen and how far the mouse moves on the mouse pad, which may be expressed as pixels per Mickey, or pixels per inch, or pixels per cm. The directional movement is called the horizontal mickey count and the vertical mickey count.

The computer industry often measures mouse sensitivity in terms of counts per inch (CPI), commonly expressed as dots per inch (DPI) – the number of steps the mouse will report when it moves one inch. In early mice, this specification was called pulses per inch (ppi).[25] The Mickey originally referred to one of these counts, or one resolvable step of motion. If the default mouse-tracking condition involves moving the cursor by one screen pixel or dot per reported step, then CPI does equate to DPI: dots of cursor motion per inch of mouse motion. The CPI or DPI as reported by manufacturers depends on how they make the mouse; the higher the CPI, the faster the cursor moves with mouse movement.

However, software can adjust the mouse sensitivity, making the cursor move faster or slower than its CPI. Current software can change the speed of the cursor dynamically, taking into account the mouse's absolute speed and the movement from the last stop-point. In most software, the Windows platforms being an example, this setting is named "speed" and refers to "cursor precision". However, some operating systems name this setting "acceleration", the typical Apple OS designation. That term is in fact incorrect: in the majority of mouse software, mouse acceleration refers to the setting allowing the user to modify the cursor acceleration, the change in speed of the cursor over time while the mouse movement is constant.

For simple software, when the mouse starts to move, the software will count the number of "counts" or "mickeys" received from the mouse and will move the cursor across the screen by that number of pixels (or multiplied by a rate factor, typically less than 1). The cursor will move slowly on the screen, having a good precision. When the movement of the mouse passes the value set for "threshold", the software will start to move the cursor more quickly, with a greater rate factor. Usually, the user can set the value of the second rate factor by changing the "acceleration" setting.

Operating systems sometimes apply acceleration, referred to as "ballistics", to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in very nonlinear response.[64]
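A minimal sketch of that two-threshold doubling scheme; the threshold values here are illustrative assumptions, not Windows' actual defaults.

```python
# A minimal sketch of two-threshold pointer "ballistics" (doubling), as
# described above for pre-XP Windows; thresholds are illustrative.
THRESHOLD_1 = 6    # counts per report above which motion is doubled
THRESHOLD_2 = 10   # counts per report above which it is doubled again

def accelerate(counts):
    """Apply threshold doubling to one axis of reported mouse motion."""
    magnitude = abs(counts)
    if magnitude > THRESHOLD_1:
        counts *= 2
    if magnitude > THRESHOLD_2:
        counts *= 2
    return counts

# Applied separately per axis, which is why the response is so nonlinear:
for raw in (3, 8, 15):
    print(raw, "->", accelerate(raw))   # 3 -> 3, 8 -> 16, 15 -> 60
```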

Mousepads
Main article: Mousepad
Engelbart's original mouse did not require a mousepad;[65] the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice starting with the steel roller ball mouse have required a mousepad for optimal performance.

The mousepad, the most common mouse accessory, appears most commonly in conjunction with mechanical mice, because to roll smoothly the ball requires more friction than common desk surfaces usually provide. So-called "hard mousepads" for gamers or optical/laser mice also exist.

Most optical and laser mice do not require a pad. Whether to use a hard or soft mousepad with an optical mouse is largely a matter of personal preference. One exception occurs when the desk surface creates problems for the optical or laser tracking, for example, a transparent or reflective surface.

In the marketplace

Computer mice built between 1986 and 2007
Around 1981 Xerox included mice with its Xerox Star, based on the mouse used in the 1970s on the Alto computer at Xerox PARC. Sun Microsystems, Symbolics, Lisp Machines Inc., and Tektronix also shipped workstations with mice, starting in about 1981. Later, inspired by the Star, Apple Computer released the Apple Lisa, which also used a mouse. However, none of these products achieved large-scale success. Only with the release of the Apple Macintosh in 1984 did the mouse see widespread use.[66]

The Macintosh design,[67] commercially successful and technically influential, led many other vendors to begin producing mice or including them with their other computer products (by 1986, Atari ST, Amiga, Windows 1.0, GEOS for the Commodore 64, and the Apple IIGS).[68]

The widespread adoption of graphical user interfaces in the software of the 1980s and 1990s made mice all but indispensable for controlling computers. In November 2008, Logitech built their billionth mouse.[69]

Use in games

Logitech G5 laser mouse designed for gaming
The Mac OS Desk Accessory Puzzle in 1984 was the first game designed specifically for a mouse.[70] The device often functions as an interface for PC-based computer games and sometimes for video game consoles.

First-person shooters

First-person shooters naturally lend themselves to separate and simultaneous control of the player's movement and aim, and on computers this has traditionally been achieved with a combination of keyboard and mouse. Players use the X-axis of the mouse for looking (or turning) left and right, and the Y-axis for looking up and down; the keyboard is used for movement and supplemental inputs.

Many first-person gamers prefer a mouse over a gamepad or joystick because the mouse is a linear input device, which allows for fast and precise control. Holding a gamepad or joystick in a given position produces a corresponding constant movement or rotation, i.e. the output is an integral of the user's input; in contrast, the output of a mouse directly corresponds to how far it is moved in a given direction (often multiplied by an "acceleration" factor derived from how quickly the mouse is moved). The effect of this is that a mouse is well suited to small, precise movements as well as large, quick movements, both of which are important in first-person gaming.[71] This advantage also extends in varying degrees to other game styles, notably real-time strategy.
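A minimal sketch of the contrast, with illustrative rate and sensitivity values:

```python
# A minimal sketch contrasting the two input models: a joystick's deflection
# sets a rate (the view integrates it over time), while a mouse's
# displacement maps directly to a view change.
def joystick_view(deflection, rate, dt, view):
    """Constant deflection turns the view at a constant rate."""
    return view + deflection * rate * dt        # output = integral of input

def mouse_view(delta, sensitivity, view):
    """View change is proportional to how far the mouse moved."""
    return view + delta * sensitivity           # output = scaled input

# Holding a joystick half-deflected for one second turns 60 degrees...
print(joystick_view(0.5, 120.0, 1.0, 0.0))      # 60.0
# ...whereas a quick 80-count mouse flick turns 16 degrees instantly.
print(mouse_view(80, 0.2, 0.0))                 # 16.0
```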

The left button usually controls primary fire. If the game supports multiple fire modes, the right button often provides secondary fire from the selected weapon. Games with only a single fire mode will generally map secondary fire to ironsights. In some games, the right button may also provide bonus options for a particular weapon, such as allowing access to the scope of a sniper rifle or allowing the mounting of a bayonet or silencer.

Gamers can use a scroll wheel for changing weapons (or for controlling scope-zoom magnification, in older games). On most first person shooter games, programming may also assign more functions to additional buttons on mice with more than three controls. A keyboard usually controls movement (for example, WASD for moving forward, left, backward and right, respectively) and other functions such as changing posture. Since the mouse serves for aiming, a mouse that tracks movement accurately and with less lag (latency) will give a player an advantage over players with less accurate or slower mice. In some cases the right mouse button may be used to move the player forward, either in lieu of, or in conjunction with the typical WASD configuration.

Many games provide players with the option of mapping their own choice of a key or button to a certain control.

An early player technique, circle strafing, saw a player continuously strafe while aiming and shooting at an opponent by walking in a circle around the opponent, with the opponent at the center of the circle. Players could achieve this by holding down a key for strafing while continuously aiming the mouse towards the opponent.

Games using mice for input are so popular that many manufacturers make mice specifically for gaming. Such mice may feature adjustable weights, high-resolution optical or laser components, additional buttons, ergonomic shape, and other features such as adjustable CPI.

Many games, such as first- or third-person shooters, have a setting named "invert mouse" or similar (not to be confused with "button inversion", sometimes performed by left-handed users) which allows the user to look downward by moving the mouse forward and upward by moving the mouse backward (the opposite of non-inverted movement). This control system resembles that of aircraft control sticks, where pulling back causes pitch up and pushing forward causes pitch down; computer joysticks also typically emulate this control-configuration.

After id Software's Doom, the game that popularized first-person shooters but which did not support vertical aiming with a mouse (the y-axis served for forward/backward movement), competitor 3D Realms' Duke Nukem 3D became one of the first games to support using the mouse to aim up and down. This and other games using the Build engine had an option to invert the Y-axis. The "invert" feature actually made the mouse behave in a manner that users now regard as non-inverted (by default, moving the mouse forward resulted in looking down). Soon after, id Software released Quake, which introduced the invert feature as users now know it. Other games using the Quake engine came to market following this standard, likely due to the overall popularity of Quake.

Home consoles

In 1988, the educational video game system VTech Socrates featured a wireless mouse with an attached mouse pad as an optional controller used for some games. In the early 1990s, the Super Nintendo Entertainment System featured a mouse in addition to its controllers; the Mario Paint game in particular used the mouse's capabilities, as did its successor on the Nintendo 64. Sega released official mice for their Genesis/Mega Drive, Saturn and Dreamcast consoles. NEC sold official mice for its PC Engine and PC-FX consoles. Sony Computer Entertainment released an official mouse product for the PlayStation console, and included one with the Linux for PlayStation 2 kit; however, users can attach virtually any USB mouse to the PlayStation 2 console. In addition, the PlayStation 3 fully supports USB mice, and the Wii gained this capability through a later software update.

KEYBOARD

In computing, a keyboard is a typewriter-style device which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Following the decline of punch cards and paper tape, teleprinter-style keyboards became the main input device for computers.

A keyboard typically has characters engraved or printed on the keys and each press of a key typically corresponds to a single written symbol. However, to produce some symbols requires pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers or signs (characters), other keys or simultaneous key presses can produce actions or execute computer commands.

Despite the development of alternative input devices, such as the mouse, touchscreen, pen devices, character recognition and voice recognition, the keyboard remains the most commonly used device for direct (human) input of alphanumeric data into computers.

In normal usage, the keyboard is used as a text entry interface to type text and numbers into a word processor, text editor or other programs. In a modern computer, the interpretation of key presses is generally left to the software. A computer keyboard distinguishes each physical key from every other and reports all key presses to the controlling software. Keyboards are also used for computer gaming, either with regular keyboards or with keyboards that have special gaming features, which can expedite frequently used keystroke combinations. A keyboard is also used to give commands to the operating system of a computer, such as Windows' Control-Alt-Delete combination, which brings up a task window or shuts down the machine. A command-line interface is a type of user interface operated entirely through a keyboard, or another device doing the job of one.
History
While typewriters are the definitive ancestor of all key-based text entry devices, the computer keyboard as a device for electromechanical data entry and communication derives largely from the utility of two devices: teleprinters (or teletypes) and keypunches. It was through such devices that modern computer keyboards inherited their layouts.

As early as the 1870s, teleprinter-like devices were used to simultaneously type and transmit stock market text data from the keyboard across telegraph lines to stock ticker machines to be immediately copied and displayed onto ticker tape. The teleprinter, in its more contemporary form, was developed from 1907 to 1910 by American mechanical engineer Charles Krum and his son Howard, with early contributions by electrical engineer Frank Pearne. Earlier models were developed separately by individuals such as Royal Earl House and Frederick G. Creed.

Earlier, Herman Hollerith developed the first keypunch devices, which soon evolved to include keys for text and number entry akin to normal typewriters by the 1930s.

The keyboard on the teleprinter played a strong role in point-to-point and point-to-multipoint communication for most of the 20th century, while the keyboard on the keypunch device played a strong role in data entry and storage for just as long. The development of the earliest computers incorporated electric typewriter keyboards: the development of the ENIAC computer incorporated a keypunch device as both the input and paper-based output device, while the BINAC computer also made use of an electromechanically controlled typewriter for both data entry onto magnetic tape (instead of paper) and data output.

From the 1940s until the late 1960s, typewriters were the main means of data entry and output for computing, becoming integrated into what were known as computer terminals. Because of the limitations of terminals based upon printed text in comparison to the growth in data storage, processing and transmission, a general move toward video-based computer terminals was effected by the 1970s, starting with the Datapoint 3300 in 1967.

The keyboard remained the primary, most integrated computer peripheral well into the era of personal computing until the introduction of the mouse as a consumer device in 1984. By this time, text-only user interfaces with sparse graphics gave way to comparatively graphics-rich icons on screen. However, keyboards remain central to human-computer interaction to the present, even as mobile personal computing devices such as smartphones and tablets adapt the keyboard as an optional virtual, touchscreen-based means of data entry.

Sunday, 8 February 2015

Virtual memory

In computing, virtual memory is a memory-management technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage as seen by a process or task appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit or MMU, automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.
The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging.

Properties

Virtual memory makes application programming easier by hiding fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing.
Memory virtualization can be considered a generalization of the concept of virtual memory.

Usage

Virtual memory is an integral part of a modern computer architecture; implementations require hardware support, typically in the form of a memory management unit built into the CPU. While not strictly necessary, emulators and virtual machines can employ hardware support to increase the performance of their virtual memory implementations.[1] Consequently, older operating systems, such as those for the mainframes of the 1960s, and those for personal computers of the early to mid-1980s (e.g. DOS),[2] generally have no virtual memory functionality, though there were notable exceptions among 1960s mainframes.
The Apple Lisa is an example of a personal computer of the 1980s that features virtual memory.
Most modern operating systems that support virtual memory also run each process in its own dedicated address space. Each program thus appears to have sole access to the virtual memory. However, some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even modern ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory.
Embedded systems and other special-purpose computer systems that require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism; virtual memory systems trigger unpredictable traps that may produce unwanted "jitter" during I/O operations. This is because embedded hardware costs are often kept low by implementing all such operations with software (a technique called bit-banging) rather than with dedicated hardware.

History

In the 1940s and 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as overlaying. Virtual memory was therefore introduced not only to extend primary memory, but to make such an extension as easy as possible for programmers to use.[3] To allow for multiprogramming and multitasking, many early systems divided memory between multiple programs without virtual memory, such as early models of the PDP-10 via registers.
The concept of virtual memory was first developed by German physicist Fritz-Rudolf Güntsch at the Technische Universität Berlin in 1956 in his doctoral thesis, Logical Design of a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High Speed Memory Operation; it described a machine with six 100-word blocks of primary core memory and an address space of 1,000 100-word blocks, with hardware automatically moving blocks between primary memory and secondary drum memory.[4][5] Paging was first implemented at the University of Manchester as a way to extend the Atlas Computer's working memory by combining its 16 thousand words of primary core memory with an additional 96 thousand words of secondary drum memory. The first Atlas was commissioned in 1962, but working prototypes of paging had been developed by 1959.[3](p2)[6][7] In 1961, the Burroughs Corporation independently released the first commercial computer with virtual memory, the B5000, with segmentation rather than paging.
Before virtual memory could be implemented in mainstream operating systems, many problems had to be addressed. Dynamic address translation required expensive, difficult-to-build specialized hardware, and initial implementations slowed access to memory slightly.[3] There were worries that new system-wide algorithms utilizing secondary storage would be less effective than previously used application-specific algorithms. By 1969, the debate over virtual memory for commercial computers was over:[3] an IBM research team led by David Sayre showed that their virtual memory overlay system consistently worked better than the best manually controlled systems.[10] The first minicomputer to introduce virtual memory was the Norwegian NORD-1; during the 1970s, other minicomputers implemented virtual memory, notably VAX models running VMS.
Virtual memory was introduced to the x86 architecture with the protected mode of the Intel 80286 processor, but its segment swapping technique scaled poorly to larger segment sizes. The Intel 80386 introduced paging support underneath the existing segmentation layer, enabling the page fault exception to chain with other exceptions without double fault. However, loading segment descriptors was an expensive operation, causing operating system designers to rely strictly on paging rather than a combination of paging and segmentation.

Paged virtual memory


Page tables

Nearly all implementations of virtual memory divide a virtual address space into pages, blocks of contiguous virtual memory addresses. Pages on contemporary systems[NB 2] are usually at least 4 kilobytes in size; systems with large virtual address ranges or large amounts of real memory generally use larger page sizes.
Page tables are used to translate the virtual addresses seen by the application into the physical addresses used by the hardware to process instructions; the hardware that handles this translation is often known as the memory management unit. Each entry in the page table holds a flag indicating whether the corresponding page is in real memory. If it is, the page table entry contains the real memory address at which the page is stored. When the hardware references a page whose page table entry indicates that it is not currently in real memory, the hardware raises a page fault exception, invoking the paging supervisor component of the operating system.
Systems can have one page table for the whole system, separate page tables for each application and segment, a tree of page tables for large segments or some combination of these. If there is only one page table, different applications running at the same time use different parts of a single range of virtual addresses. If there are multiple page or segment tables, there are multiple virtual address spaces and concurrent applications with separate page tables redirect to different real addresses.
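As a concrete illustration, the following minimal Python sketch models a single-level page table; the 4 KiB page size matches the typical size mentioned above, while the table contents and function name are invented for the example.

    PAGE_SIZE = 4096  # 4 KiB, a typical page size

    # Hypothetical single-level page table: virtual page number -> physical frame,
    # with None marking a page that is not currently in real memory.
    page_table = {0: 7, 1: 3, 2: None}

    def translate(vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)  # split into page number and offset
        frame = page_table.get(vpn)
        if frame is None:
            # Real hardware would raise a page fault exception here,
            # invoking the paging supervisor described below.
            raise RuntimeError("page fault at virtual page %d" % vpn)
        return frame * PAGE_SIZE + offset       # same offset within the physical frame

    print(hex(translate(0x1ABC)))  # virtual page 1 -> frame 3 -> 0x3abc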

Paging supervisor

This part of the operating system creates and manages page tables. If the hardware raises a page fault exception, the paging supervisor accesses secondary storage, reads in the page containing the virtual address that caused the fault, updates the page tables to reflect the page's new physical location, and tells the translation mechanism to restart the request.
When all physical memory is already in use, the paging supervisor must free a page in primary storage to hold the swapped-in page. The supervisor uses one of a variety of page replacement algorithms such as least recently used to determine which page to free.
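A least-recently-used policy can be sketched in a few lines of Python. This is only a simulation that counts page faults for a given reference string, not the mechanism any particular operating system uses:

    from collections import OrderedDict

    def lru_faults(refs, frames):
        resident = OrderedDict()  # pages in real memory, least to most recently used
        faults = 0
        for page in refs:
            if page in resident:
                resident.move_to_end(page)        # a hit refreshes the page's recency
            else:
                faults += 1
                if len(resident) == frames:
                    resident.popitem(last=False)  # evict the least recently used page
                resident[page] = None
        return faults

    print(lru_faults([1, 2, 3, 1, 4, 5], frames=3))  # -> 5 faults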

Pinned pages

Operating systems have memory areas that are pinned (never swapped to secondary storage). Other terms used are locked, fixed, or wired pages. For example, interrupt mechanisms rely on an array of pointers to their handlers, such as I/O completion and page fault. If the pages containing these pointers or the code that they invoke were pageable, interrupt handling would become far more complex and time-consuming, particularly in the case of page fault interruptions. Hence, some part of the page table structures is not pageable.
Some pages may be pinned for short periods of time, others may be pinned for long periods of time, and still others may need to be permanently pinned. For example:
  • The paging supervisor code and drivers for secondary storage devices on which pages reside must be permanently pinned, as otherwise paging wouldn't even work because the necessary code wouldn't be available.
  • Timing-dependent components may be pinned to avoid variable paging delays.
  • Data buffers that are accessed directly by peripheral devices using direct memory access or I/O channels must reside in pinned pages while the I/O operation is in progress, because such devices and the buses to which they are attached expect to find data buffers at physical memory addresses. Regardless of whether the bus has a memory management unit for I/O, transfers cannot be stopped when a page fault occurs and then restarted once the fault has been processed.
In IBM's operating systems for System/370 and successor systems, the term is "fixed": pages may be long-term fixed, short-term fixed, or unfixed (i.e., pageable). System control structures are often long-term fixed (measured in wall-clock time, i.e., seconds rather than fractions of a second), whereas I/O buffers are usually short-term fixed (often for tens of milliseconds). Indeed, the OS has a special facility for "fast fixing" these short-term fixed data buffers, performed without resorting to a time-consuming Supervisor Call instruction.
Multics used the term "wired". OpenVMS and Windows refer to pages temporarily made nonpageable (as for I/O buffers) as "locked", and simply "nonpageable" for those that are never pageable.

Virtual-real operation

In OS/VS1 and similar OSes, some parts of system memory are managed in "virtual-real" mode, called "V=R". In this mode, every virtual address corresponds to the same real address. This mode is used for interrupt mechanisms, for the paging supervisor and page tables in older systems, and for application programs using non-standard I/O management. For example, IBM's z/OS has three modes (virtual-virtual, virtual-real and virtual-fixed).[11][page needed]

Thrashing

When paging and page stealing are used, a problem called "thrashing" can occur, in which the computer spends an excessive amount of time transferring pages to and from a backing store, slowing useful work. A task's working set is the minimum set of pages that should be in memory in order for it to make useful progress. Thrashing occurs when there is insufficient memory available to store the working sets of all active programs. Adding real memory is the simplest response, but improving application design, scheduling, and memory usage can also help. Another solution is to reduce the number of active tasks on the system, which reduces demand on real memory by swapping out the entire working set of one or more processes.
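The working-set idea can be made concrete with a short Python sketch that measures, at each memory reference, how many distinct pages were touched within a sliding window; the window length and reference string are invented for the example:

    def working_set_sizes(refs, window):
        # Number of distinct pages referenced within the last `window` references.
        return [len(set(refs[max(0, i - window + 1):i + 1]))
                for i in range(len(refs))]

    refs = [1, 2, 1, 3, 4, 4, 4, 1]
    print(working_set_sizes(refs, window=3))  # -> [1, 2, 2, 3, 3, 2, 1, 2]

When the combined working sets of all active tasks exceed the real memory available, page stealing sets in and thrashing begins.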

Segmented virtual memory

Some systems, such as the Burroughs B5500,[12] use segmentation instead of paging, dividing virtual address spaces into variable-length segments. A virtual address here consists of a segment number and an offset within the segment. The Intel 80286 supports a similar segmentation scheme as an option, but it is rarely used. Segmentation and paging can be used together by dividing each segment into pages; systems with this memory structure, such as Multics and IBM System/38, are usually paging-predominant, segmentation providing memory protection.[13][14][15]
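Resolving such a segmented address amounts to a base-and-limit lookup, as in the following Python sketch; the segment table contents and function name are hypothetical:

    # Hypothetical segment table: segment number -> (base address, segment length)
    segment_table = {0: (0x0000, 0x0400), 1: (0x8000, 0x1000)}

    def resolve(segment, offset):
        base, limit = segment_table[segment]
        if offset >= limit:
            # Real hardware would raise a protection (segmentation) fault here.
            raise RuntimeError("offset outside segment")
        return base + offset

    print(hex(resolve(1, 0x0ABC)))  # -> 0x8abc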
In the Intel 80386 and later IA-32 processors, the segments reside in a 32-bit linear, paged address space. Segments can be moved in and out of that space; pages there can "page" in and out of main memory, providing two levels of virtual memory; few if any operating systems do so, instead using only paging. Early non-hardware-assisted x86 virtualization solutions combined paging and segmentation because x86 paging offers only two protection domains whereas a VMM / guest OS / guest applications stack needs three.[16]:22 The difference between paging and segmentation systems is not only about memory division; segmentation is visible to user processes, as part of memory model semantics. Hence, instead of memory that looks like a single large space, it is structured into multiple spaces.
This difference has important consequences: a segment is not a page of variable length, nor a simple way to lengthen the address space. Segmentation can provide a single-level memory model, in which there is no distinction between process memory and the file system, consisting only of a list of segments (files) mapped into the process's potential address space.[17]
This is not the same as the mechanisms provided by calls such as mmap and Win32's MapViewOfFile, because inter-file pointers do not work when mapping files into semi-arbitrary places. In Multics, a file (or a segment from a multi-segment file) is mapped into a segment in the address space, so files are always mapped at a segment boundary. A file's linkage section can contain pointers for which an attempt to load the pointer into a register or make an indirect reference through it causes a trap. The unresolved pointer contains an indication of the name of the segment to which the pointer refers and an offset within the segment; the handler for the trap maps the segment into the address space, puts the segment number into the pointer, changes the tag field in the pointer so that it no longer causes a trap, and returns to the code where the trap occurred, re-executing the instruction that caused the trap.[18] This eliminates the need for a linker completely[3] and works when different processes map the same file into different places in their private address spaces.
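For comparison, the mmap mechanism mentioned above can be exercised from Python's standard mmap module; the file name and contents here are arbitrary:

    import mmap, os, tempfile

    path = os.path.join(tempfile.gettempdir(), "vm_demo.bin")
    with open(path, "wb") as f:
        f.write(b"hello virtual memory")

    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as m:   # map the whole file into the address space
            print(m[:5].decode())             # reads are served by paging the file in
            m[0:5] = b"HELLO"                 # writes go through the same mapping

    os.remove(path)

As the text notes, such mappings land at semi-arbitrary addresses, so pointers stored inside one mapped file cannot reliably refer into another, unlike Multics segments.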

Address space swapping

Some operating systems provide for swapping entire address spaces, in addition to whatever facilities they have for paging and segmentation. When this occurs, the OS writes the pages and segments currently in real memory to swap files. In a swap-in, the OS reads back the data from the swap files, but does not automatically read back pages that had been paged out at the time of the swap-out operation.
IBM's MVS, from OS/VS2 Release 2 through z/OS, provides for marking an address space as unswappable; doing so does not pin any pages in the address space. This can be done for the duration of a job by entering the name of an eligible[20] main program in the Program Properties Table with an unswappable flag. In addition, privileged code can temporarily make an address space unswappable with a SYSEVENT Supervisor Call instruction (SVC); certain changes[21] in the address space properties require that the OS swap it out and then swap it back in, using SYSEVENT TRANSWAP.