Friday 16 September 2011



YouTube adds a built-in video editor

In an effort to let YouTube users more easily change their videos after uploading them, YouTube is rolling out a brand-new video editor.
No, this isn't the standalone video editor meant for splicing together clips from multiple videos that's been available in the service's TestTube labs since last year. Instead, it's a new one designed to give users a way to do quick fixes without having to re-upload the video. It's like a retouching tool for photos, but for non-commercial video.
The idea for including an editor came out of eyeballing videos that had been uploaded to the service, YouTube product manager Jason Toff told CNET.
"We noticed a lot of the videos that were uploaded to YouTube could use some polish, some basic video editing," Toff said. "We noticed a lot of videos that had extra footage at the beginning that could have been trimmed off, or some footage at the end that could be trimmed off, a lot of videos that were really shaky and could use stabilization, and dark videos, etc."
The answer is the new tool, which lets users make both quick fixes and more substantial edits to their videos.
YouTube's new visual effects filters, done in a collaboration with Google-owned Picnik.
(Credit: YouTube)
The quick fixes menu includes basic changes like rotating a video, increasing the fill light to brighten up a dark shot, and adjusting contrast, color temperature, and saturation. Also included are tools to trim the beginning and end of a clip, and to stabilize a shaky video, something YouTube introduced to its other editor in March. And it wouldn't be Google if there weren't an "I'm feeling lucky" button, which does a quick analysis of the video and tweaks its color, brightness, and contrast settings automatically.
The quick fixes menu, including the "I'm feeling lucky" button.
(Credit: YouTube)
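YouTube hasn't published how the "I'm feeling lucky" analysis works, but a rough sense of this kind of one-click correction can be sketched with the Pillow imaging library applied to a single extracted frame. Everything here, from the file name to the adjustment factors, is an illustrative assumption rather than YouTube's actual method.

# A minimal sketch of an "I'm feeling lucky"-style auto-fix, applied
# to one video frame. Assumes the Pillow library (pip install Pillow);
# "frame.jpg" is a hypothetical input file.
from PIL import Image, ImageOps, ImageEnhance

def quick_fix(frame):
    """Automatically tweak contrast, brightness, and color."""
    # Stretch the histogram, ignoring the darkest/brightest 1% of
    # pixels: a simple stand-in for an auto-contrast pass.
    fixed = ImageOps.autocontrast(frame, cutoff=1)
    # Gentle fill light: raise overall brightness a little
    # (a real tool would lift shadows selectively).
    fixed = ImageEnhance.Brightness(fixed).enhance(1.1)
    # Mild saturation boost to liven up washed-out footage.
    fixed = ImageEnhance.Color(fixed).enhance(1.15)
    return fixed

frame = Image.open("frame.jpg")          # one frame pulled from a video
quick_fix(frame).save("frame_fixed.jpg")
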
What users are likely to latch onto, though, are the new effects tools. YouTube has included 14 effects, developed in collaboration with Picnik, the Web-based photo-editing service Google acquired last year. These presets can do things like convert your video to black and white, or make it look like it was shot on a vintage camera, even if it wasn't. Such effects are commonplace in desktop video editors, but they're a standout for a tool that runs in the browser.
Lastly, Google has included a quick way to swap out a video's audio for one of its licensed tracks, something that's long been available but now sits alongside the other editing tools in the new editor. Users get the same selection of royalty-free music they'd find in the standalone tool, available if they want to replace a clip's existing audio track.
A big change to come with the new editor is that YouTube is letting people make these edits while preserving the original video and any social interaction it's had. That means if you change your mind later on down the road, you can come back and undo them, and YouTube will maintain the video's URL and ID, as well as its view count and comments. There is one limitation, however: if a video has more than 1,000 views, any change (even a quick lighting fix) requires saving the edited version as a new video.
One of the most interesting tidbits in all of this is that YouTube collaborated with Picnik on the filtering effects. It hints that YouTube may one day toy with selling premium filters as part of a subscription service, as Picnik does with its own photo filters and effects. Asked whether that was in the cards, a YouTube spokesman said simply that the company was "eager to hear feedback on this launch" and that there were no additional announcements about feature updates.
Google says it's rolling out the feature to users this afternoon. In the meantime, you can see a demo video of how the new features work, which I've embedded below.

Solar-Powered Bulb Provides Light After Dark

A solar-powered light bulb may sound like an oxymoron (what’s the point of a lightbulb that only works when the sun’s out?), but a company called Nokero has a prototype in the works that will charge a battery in the light bulb, making it useful after sundown.
Denver-based Nokero, short for No Kerosene, hopes to offer a safe light source to the millions living without a reliable energy supply. Common non-electric light sources such as candles, charcoal, wood and kerosene are a major health threat when regularly used indoors because of the fumes they produce.


Candles and kerosene are also often relatively expensive to obtain, and Nokero estimates that up to 20% of a family’s income in places without reliable electricity can go to purchasing candles and lighting fuel. Nokero hopes to provide an affordable and lung-friendly alternative. Priced around $20, the bulb reduces the need for fuel, and the company says it begins saving most families money within 3-8 weeks.
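That payback claim is straightforward to sanity-check: taking the $20 price at face value, a 3-8 week payback implies weekly fuel spending of roughly $2.50 to $6.70. The dollar figures in this quick check are illustrative assumptions, not Nokero's own numbers.

# Back-of-envelope check of Nokero's payback claim.
# All weekly fuel-spend figures are illustrative assumptions.
bulb_price = 20.00                               # US$, per the article

for weekly_fuel_spend in (2.50, 4.00, 6.70):     # US$/week on candles/kerosene
    payback_weeks = bulb_price / weekly_fuel_spend
    print(f"${weekly_fuel_spend:.2f}/week on fuel -> "
          f"pays for itself in {payback_weeks:.1f} weeks")
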
The company’s design comes in the form of a lantern that can be hung or placed on a table. The N200 model bulb contains four LEDs and is charged through an embedded solar panel connected to a NiMH AA size battery with a two-year lifespan. The power switch on the back of the bulb can also change the intensity of the light, from high to low, and the bulb itself is made from a durable polycarbonate similar to that used in automobile headlights.
The company is partnering with non-profits to help distribute the bulbs both nationally, such as to off-grid residents of the Navajo Nation, and internationally to countries including Haiti, Pakistan, Mexico, and Japan.

Wednesday 11 May 2011

PLANET C COMPUTERS,PIMPALGAON BASWANT,NASHIK,MAHARASHTRA


Monday 10 November 2008

PLANET 'C' COMPUTERS

PLANET 'C' COMPUTERS, BABA COMPLEX, OPP. STATE BANK, PIMPALGAON BASWANT, NASHIK


WE ARE THE LEADING INSTITUTION & CYBER CAFE CHAIN IN THE PIMPALGAON BASWANT REGION.


WE PLAN TO OPEN MANY BRANCHES AROUND NASHIK DISTRICT IN THE FUTURE.


HISTORY OF COMPUTER
The history of computing hardware encompasses the hardware, its architecture, and its impact on software. The elements of computing hardware have undergone significant improvement over their history. This improvement has triggered worldwide use of the technology: performance has improved and prices have declined.[1] Computers are accessible to ever-increasing sectors of the world's population.[2] Computing hardware has become a platform for uses other than computation, such as automation, communication, control, entertainment, and education. Each field in turn has imposed its own requirements on the hardware, which has evolved in response to those requirements.[3]
The von Neumann architecture unifies our current computing hardware implementations.[4] Since digital computers rely on digital storage, and tend to be limited by the size and speed of memory, the history of computer data storage is tied to the development of computers. The major elements of computing hardware implement abstractions: input,[5] output,[6] memory,[7] and processor. A processor is composed of control[8] and datapath.[9] In the von Neumann architecture, control of the datapath is stored in memory. This allowed control to become an automatic process; the datapath could be under software control, perhaps in response to events. Beginning with mechanical datapaths such as the abacus and astrolabe, early hardware used analogs to carry out a computation, including water and even air as the analog quantities: analog computers have used lengths, pressures, voltages, and currents to represent the results of calculations.[10] Eventually the voltages or currents were standardized, and then digitized. Digital computing elements have ranged from mechanical gears, to electromechanical relays, to vacuum tubes, to transistors, and to integrated circuits, all of which currently implement the von Neumann architecture.[11]
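To make the stored-program idea concrete, here is a minimal sketch of a von Neumann-style machine in Python: the program sits in the same memory as its data, and a fetch-decode-execute loop drives the datapath. The three-instruction set is invented purely for illustration; real machines encode instructions as binary words rather than tuples.

# A toy stored-program machine. The program lives in the same memory
# as the data it manipulates, and a fetch-decode-execute loop walks
# through it. The instruction set here is invented for illustration.
def run(memory):
    pc = 0                              # program counter
    acc = 0                             # accumulator register
    while True:
        op, arg = memory[pc]            # fetch and decode
        pc += 1
        if op == "LOAD":                # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":               # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":             # memory[arg] <- acc
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Cells 0-3 hold the program; cells 4-6 hold the data.
memory = [
    ("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0),
    2, 3, 0,
]
print(run(memory)[6])   # prints 5: the machine computed 2 + 3
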

A computer is a machine that manipulates data according to a list of instructions.
The first devices that resemble modern computers date to the mid-20th century (1940–1945), although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers (PC).[1] Modern computers are based on tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space.[2] Today, simple computers may be made small enough to fit into a wristwatch and be powered from a watch battery. Personal computers, in various forms, are icons of the Information Age and are what most people think of as "a computer"; however, the most common form of computer in use today is the embedded computer. Embedded computers are small, simple devices that are used to control other devices—for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.
The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks given enough time and storage capacity.

COMPUTER

A computer is a device capable of performing a series of arithmetic or logical operations. A computer is distinguished from a calculating machine, such as an electronic calculator, by being able to store a computer program (so that it can repeat its operations and make logical decisions), by the number and complexity of the operations it can perform, and by its ability to process, store, and retrieve data without human intervention. Computers developed along two separate engineering paths, producing two distinct types of computer—analog and digital. An analog computer operates on continuously varying data; a digital computer performs operations on discrete data. Computers are categorized by both size and the number of people who can use them concurrently. Supercomputers are sophisticated machines designed to perform complex calculations at maximum speed; they are used to model very large dynamic systems, such as weather patterns. Mainframes, the largest and most powerful general-purpose systems, are designed to meet the computing needs of a large organization by serving hundreds of computer terminals at the same time. Minicomputers, though somewhat smaller, also are multiuser computers, intended to meet the needs of a small company by serving up to a hundred terminals. Microcomputers, computers powered by a microprocessor, are subdivided into personal computers and workstations, the latter typically incorporating RISC processors. Although microcomputers were originally single-user computers, the distinction between them and minicomputers has blurred as microprocessors have become more powerful. Linking multiple microcomputers together through a local area network or by joining multiple microprocessors together in a parallel-processing system has enabled smaller systems to perform tasks once reserved for mainframes, and the techniques of grid computing have enabled computer scientists to utilize the unemployed processing power of connected computers. Advances in the technology of integrated circuits have spurred the development of smaller and more powerful general-purpose digital computers. Not only has this reduced the size of the large, multi-user mainframe computers—which in their early years were large enough to walk through—to that of large pieces of furniture, but it has also made possible powerful, single-user personal computers and workstations that can sit on a desktop. These, because of their relatively low cost and versatility, have largely replaced typewriters in the workplace and rendered the analog computer obsolete.

Analog Computers

An analog computer represents data as physical quantities and operates on the data by manipulating the quantities. It is designed to process data in which the variable quantities vary continuously (see analog circuit); it translates the relationships between the variables of a problem into analogous relationships between electrical quantities, such as current and voltage, and solves the original problem by solving the equivalent problem, or analog, that is set up in its electrical circuits. Because of this feature, analog computers were especially useful in the simulation and evaluation of dynamic situations, such as the flight of a space capsule or the changing weather patterns over a certain area. The key component of the analog computer is the operational amplifier, and the computer's capacity is determined by the number of amplifiers it contains (often over 100).
Although analog computers are commonly found in such forms as speedometers and watt-hour meters, they largely have been made obsolete for general-purpose mathematical computations and data storage by digital computers.

Digital Computers

A digital computer is designed to process data in numerical form (see digital circuit); its circuits perform directly the mathematical operations of addition, subtraction, multiplication, and division. The numbers operated on by a digital computer are expressed in the binary system; binary digits, or bits, are 0 and 1, so that 0, 1, 10, 11, 100, 101, etc., correspond to 0, 1, 2, 3, 4, 5, etc. Binary digits are easily expressed in the computer circuitry by the presence (1) or absence (0) of a current or voltage. A series of eight consecutive bits is called a "byte"; the eight-bit byte permits 256 different "on-off" combinations. Each byte can thus represent one of up to 256 alphanumeric characters, and such an arrangement is called a "single-byte character set" (SBCS); the de facto standard for this representation is the extended ASCII character set. Some languages, such as Japanese, Chinese, and Korean, require more than 256 unique symbols. The use of two bytes, or 16 bits, for each symbol, however, permits the representation of up to 65,536 characters or ideographs. Such an arrangement is called a "double-byte character set" (DBCS); Unicode is the international standard for such a character set. One or more bytes, depending on the computer's architecture, is sometimes called a digital word; it may specify not only the magnitude of the number in question, but also its sign (positive or negative), and may also contain redundant bits that allow automatic detection, and in some cases correction, of certain errors (see code; information theory). A digital computer can store the results of its calculations for later use, can compare results with other data, and on the basis of such comparisons can change the series of operations it performs. Digital computers are used for reservations systems, scientific investigation, data-processing and word-processing applications, desktop publishing, electronic games, and many other purposes.

Processing of Data

The operations of a digital computer are carried out by logic circuits, which are digital circuits whose single output is determined by the conditions of the inputs, usually two or more. The various circuits processing data in the computer's interior must operate in a highly synchronized manner; this is accomplished by controlling them with a very stable oscillator, which acts as the computer's "clock." Typical computer clock rates range from several million cycles per second to several hundred million, with some of the fastest computers having clock rates of about a billion cycles per second. Operating at these speeds, digital computer circuits are capable of performing thousands to trillions of arithmetic or logic operations per second, thus permitting the rapid solution of problems that would be impossible for a human to solve by hand. In addition to the arithmetic and logic circuitry and a small number of registers (storage locations that can be accessed faster than main storage and are used to hold the intermediate results of calculations), the heart of the computer—called the central processing unit, or CPU—contains the circuitry that decodes the set of instructions, or program, and causes it to be executed.
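Before moving on to storage, the binary counting and byte-based character sets described above are easy to verify in a few lines of Python. The sample characters below are arbitrary choices, with UTF-16 standing in for the two-byte arrangement the entry describes.

# Binary counting and byte-based character encodings, as described
# above. The sample characters are arbitrary choices.
for n in range(6):                     # 0..5 in decimal and binary
    print(n, format(n, "b"))           # -> 0, 1, 10, 11, 100, 101

print(2 ** 8)                          # 256: values one 8-bit byte can hold

print("A".encode("ascii"))             # single-byte set: one byte for 'A'
print("漢".encode("utf-16-be"))         # a two-byte arrangement for CJK
print(len("漢".encode("utf-16-be")))    # -> 2 bytes per symbol
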
Storage and Retrieval of Data

Associated with the central processing unit is the storage unit, or memory, where results or other data are stored for periods of time ranging from a small fraction of a second to days or weeks before being retrieved for further processing. Once made up of vacuum tubes and later of small doughnut-shaped ferromagnetic cores strung on a wire matrix, main storage now consists of integrated circuits, each of which contains thousands of semiconductor devices. Where each vacuum tube or core represented one bit and the total memory of the computer was measured in thousands of bytes (or kilobytes, KB), each semiconductor device now represents millions of bytes (or megabytes, MB) and the total memory of mainframe computers is measured in billions of bytes (or gigabytes, GB). Random-access memory (RAM), which both can be read from and written to, is lost each time the computer is turned off. Read-only memory (ROM), which cannot be written to, maintains its content at all times and is used to store the computer's control information. Programs and data that are not currently being used in main storage can be saved on auxiliary storage, or external storage. Although punched paper tape and punched cards once served this purpose, the major materials used today are magnetic tape and magnetic disks, which can be read from and written to, and two types of optical disks, the compact disc (CD) and its successor the digital versatile disc (DVD). DVD is an improved optical storage technology capable of storing vastly greater amounts of data than the CD technology. CD-Read-Only Memory (CD-ROM) and DVD-Read-Only Memory (DVD-ROM) disks can only be read—the disks are impressed with data at the factory but once written cannot be erased and rewritten with new data. The latter part of the 1990s saw the introduction of new optical storage technologies: CD-Recordable (CD-R) and DVD-Recordable (DVD-R), optical disks that can be written to by the computer to create a CD-ROM or DVD-ROM, but can be written to only once; and CD-ReWritable (CD-RW), DVD-ReWritable (DVD-RW and DVD+RW), and DVD-Random Access Memory (DVD-RAM), disks that can be written to multiple times. When compared to semiconductor memory, magnetic and optical storage is less expensive, is not volatile (i.e., data is not lost when the power to the computer is shut off), and provides a convenient way to transfer data from one computer to another. Thus operating instructions or data output from one computer can be stored away from the computer and then retrieved either by the same computer or another. In a system using magnetic tape the information is stored by a specially designed tape recorder somewhat similar to one used for recording sound. In magnetic and optical disk systems the principle is the same except that the magnetic or optical medium lies in a path, or track, on the surface of a disk. The disk drive also contains a motor to spin the disk and a magnetic or optical head or heads to read and write the data to the disk. Drives take several forms, the most significant difference being whether the disk can be removed from the drive assembly. Removable magnetic disks are most commonly made of mylar enclosed in a paper or plastic holder. These floppy disks have varying capacities, with very high density disks holding 250 MB—more than enough to contain a dozen books the size of Tolstoy's Anna Karenina.
Compact discs can hold many hundreds of megabytes, and are used, for example, to store the information contained in an entire multivolume encyclopedia or set of reference works, and DVD disks can hold ten times as much as that. Nonremovable disks are made of metal and arranged in spaced layers. They can hold more data and can read and write data much faster than floppies. Data are entered into the computer and the processed data made available via input/output devices. All auxiliary storage devices are used as input/output devices. For many years, the most popular input/output medium was the punched card. Although this is still used, the most popular input device is now the computer terminal and the most popular output device is the high-speed printer. Human beings can directly communicate with the computer through computer terminals, entering instructions and data by means of keyboards much like the ones on typewriters, by using a pointing device such as a mouse, trackball, or touchpad, or by speaking into a microphone that is connected to a computer running voice-recognition software. Responses may be displayed on a cathode-ray tube, liquid-crystal display, or printer. The CPU, main storage, auxiliary storage, and input/output devices collectively make up a system.

Sharing the Computer's Resources

Generally, the slowest operations that a computer must perform are those of transferring data, particularly when data is received from or delivered to a human being. The computer's central processor is idle for much of this period, and so two similar techniques are used to use its power more fully. Time sharing, used on large computers, allows several users at different terminals to use a single computer at the same time. The computer performs part of a task for one user, then suspends that task to do part of another for another user, and so on. Each user only has the computer's use for a fraction of the time, but the task switching is so rapid that most users are not aware of it. Most of the tens of millions of computers in the world are stand-alone, single-user devices known variously as personal computers or workstations. For them, multitasking involves the same type of switching, but for a single user. This permits a user, for example, to have one file printed and another sorted while editing a third in a word-processing session. Such personal computers can also be linked together in a network, where each computer is connected to others, usually by wires or coaxial cables, permitting all to share resources such as printers, modems, and hard-disk storage devices.

Computer Programs and Programming Languages

Before a computer can be used to solve a given problem, it must first be programmed, that is, prepared for solving the problem by being given a set of instructions, or program. The various programs by which a computer controls aspects of its operations, such as those for translating data from one form to another, are known as software, as contrasted with hardware, which is the physical equipment comprising the installation. In most computers the moment-to-moment control of the machine resides in a special software program called an operating system, or supervisor. Other forms of software include assemblers and compilers for programming languages and applications for business and home use (see computer program). Software is of great importance; the usefulness of a highly sophisticated array of hardware can be severely compromised by the lack of adequate software.
Each instruction in the program may be a simple, single step, telling the computer to perform some arithmetic operation, to read the data from some given location in the memory, to compare two numbers, or to take some other action. The program is entered into the computer's memory exactly as if it were data, and on activation, the machine is directed to treat this material in the memory as instructions. Other data may then be read in and the computer can carry out the program to solve the particular problem. Since computers are designed to operate with binary numbers, all data and instructions must be represented in this form; the machine language, in which the computer operates internally, consists of the various binary codes that define instructions together with the formats in which the instructions are written. Since it is time-consuming and tedious for a programmer to work in actual machine language, a programming language, or high-level language, designed for the programmer's convenience, is used for the writing of most programs. The computer is programmed to translate this high-level language into machine language and then solve the original problem for which the program was written. Certain high-level programming languages are universal, varying little from machine to machine.

Development of Computers

Although the development of digital computers is rooted in the abacus and early mechanical calculating devices, Charles Babbage is credited with the design of the first modern computer, the "analytical engine," during the 1830s. American scientist Vannevar Bush built a mechanically operated device, called a differential analyzer, in 1930; it was the first general-purpose analog computer. John Atanasoff constructed the first semielectronic digital computing device in 1939. The first fully automatic calculator was the Mark I, or Automatic Sequence Controlled Calculator, begun in 1939 at Harvard by Howard Aiken, while the first all-purpose electronic digital computer, ENIAC (Electronic Numerical Integrator And Calculator), which used thousands of vacuum tubes, was completed in 1946 at the Univ. of Pennsylvania. UNIVAC (UNIVersal Automatic Computer) became (1951) the first computer to handle both numeric and alphabetic data with equal facility; this was the first commercially available computer. First-generation computers were supplanted by the transistorized computers (see transistor) of the late 1950s and early 60s, second-generation machines that were smaller, used less power, and could perform a million operations per second. They, in turn, were replaced by the third-generation integrated-circuit machines of the mid-1960s and 1970s that were even smaller and were far more reliable. The 1980s and 90s were characterized by the development of the microprocessor and the evolution of increasingly smaller but powerful computers, such as the personal computer and personal digital assistant, which ushered in a period of rapid growth in the computer industry.
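As a closing illustration of the time-sharing scheme described under "Sharing the Computer's Resources," here is a minimal round-robin scheduler sketched in Python. The cooperative-generator approach and the sample tasks are illustrative assumptions; a real operating system preempts tasks with hardware timer interrupts rather than waiting for them to yield.

# A minimal round-robin scheduler in the spirit of time sharing.
# Tasks are generators that yield control back to the scheduler at
# the end of each time slice; the tasks themselves are invented.
from collections import deque

def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i + 1} of {steps}")
        yield                         # give the "CPU" back to the scheduler

def round_robin(tasks):
    queue = deque(tasks)
    while queue:
        current = queue.popleft()     # pick the next runnable task
        try:
            next(current)             # run it for one time slice
            queue.append(current)     # not finished: back of the line
        except StopIteration:
            pass                      # task finished; drop it

round_robin([task("print job", 2), task("sort job", 3), task("edit", 2)])
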