Longmont Data Recovery in Boulder & Denver, CO with Computer Physicians

Fast, Affordable Data Recovery in Longmont & Boulder, Colorado with Computer Physicians

Fast Data Recovery in Longmont, CO

Our lives are intricately woven with technology, and the importance of data cannot be overstated. From precious memories to critical business information, the loss of data can be devastating. Fortunately, residents of Longmont, Boulder, and Erie, Colorado, can breathe a sigh of relief knowing that Steve at Computer Physicians, LLC is at their service, providing reliable, fast, and low-cost data recovery solutions.

The Need for Data Recovery Services:

Data loss can occur for a variety of reasons, including accidental deletion, hardware failure, or software issues. When faced with such a situation, it’s crucial to turn to professionals who specialize in data recovery. Computer Physicians, LLC in Longmont understands the urgency and sensitivity of data recovery needs and is equipped with the expertise to handle a wide range of scenarios.

Affordability Matters:

One of the standout features of Computer Physicians in Longmont is their commitment to providing low-cost, fast data recovery services, usually completed within one to three days. Understanding that data loss can happen unexpectedly and can be financially burdensome, the team at Computer Physicians aims to make their services accessible to individuals and businesses alike. This commitment to affordability sets them apart in the realm of data recovery services.

State-of-the-Art Technology:

Computer Physicians leverage state-of-the-art technology to ensure the highest chances of successful data recovery. Their cutting-edge tools and techniques enable them to recover data from various storage devices, including hard drives, SSDs, USB drives, and more. Whether it’s a hardware malfunction or a logical issue, their skilled technicians employ advanced methods to retrieve your valuable data.

Personalized Approach:

Every data recovery situation is unique, and Computer Physicians understand the importance of a personalized approach. They take the time to assess each case individually, tailoring their strategies to the specific needs and circumstances of the customer. This attention to detail enhances the likelihood of a successful recovery while maintaining the integrity of the data.

Customer-Centric Service:

Beyond technical expertise, Computer Physicians prioritize excellent customer service. They recognize that dealing with data loss can be a stressful experience, and their team is dedicated to providing a supportive and transparent process. Clients are kept informed at every stage of the data recovery journey, ensuring peace of mind and building trust in the service.

In Longmont, Boulder, and Erie, Colorado, Computer Physicians stand as a beacon of hope for those facing data loss challenges. Their commitment to affordability, state-of-the-art technology, personalized approach, and customer-centric service makes them a reliable partner in the realm of data recovery. When the unexpected happens, trust Steve at Computer Physicians to recover your valuable data quickly, efficiently, and cost-effectively.


Maximizing Your Technology with Computer Physicians, LLC in Longmont, CO

As a business owner or individual in Longmont, Colorado, you rely on technology to help you stay productive and competitive. But when your computer or network experiences problems, it can quickly bring your operations to a halt. That’s where Computer Physicians, LLC comes in.

Computer Physicians, LLC is a leading IT computer repair and web design company in Longmont that offers a wide range of technology services to help you get back up and running as quickly as possible. Steve is an experienced professional dedicated to helping you maximize your technology investments and achieve your business goals.

We offer comprehensive computer repair services to help you resolve any technical issues you may be facing, whether it’s a simple software problem or a more complex hardware issue. We also offer web design services to help you create a professional and user-friendly online presence for your business. And, with our data recovery services, you can rest assured that your important files and information are safe and secure, even in the event of a disaster.

At Computer Physicians, LLC, we understand that technology can be complex and confusing, which is why we strive to provide clear and straightforward solutions. Steve is always available to answer your questions and help you understand your technology options, so you can make informed decisions.

We are committed to delivering high-quality services at an affordable price, and we always go the extra mile to ensure that our clients are completely satisfied with their experience. Whether you need help with computer repair, web design, or data recovery, we have the expertise and resources to get the job done.

So, if you’re looking for a trusted partner to help you maximize your technology investments in Longmont, look no further than Computer Physicians, LLC. Contact us today to schedule a consultation and find out how we can help you achieve your business goals.

Longmont Computer Physicians – User Accounts in Microsoft Windows

This post is part of the Longmont Computer Physicians learning and teaching series: Computer Physicians of Longmont, Colorado explains user accounts in Microsoft Windows 10.

One thing you will definitely need in order to use a Windows computer is a user account. User accounts make sure people are allowed to access the computer only if the owner wants them to. To use a Windows computer, you will need a user account that has been configured for you by an administrator, or that was created when you first set up your new computer. There are many reasons why Windows has user accounts, including the following:

- Giving people a way to protect their personal files from being accessed by others (unless they want them to be accessed)
- Providing a way to assign permissions to shared files and folders on the local computer or network
- Determining what type of functions that person is allowed to perform on the computer itself
- Tracking things such as login times, failed login attempts, and file access using event logging
- Setting allowed times for users to log onto a computer or network
- Saving the personal settings of your computer, such as your desktop background, installed printers, etc.
- Assigning levels of access for software usage

Keep in mind that as a home user you won’t have to worry about most of these, because your user account will mainly be used to save the personalization settings you customized and to keep your documents from being accessed by other users. As usual, Microsoft has given us a couple of ways to work with user accounts, and each way works a little differently; we will get to that later in this article.

User Account Types

There is more than one type of account for a Windows user, and this makes sense because different people need different levels of access and permissions. The two main types of user accounts that you will be dealing with are the standard user and the administrator.

Standard user accounts are for people who need to do everyday tasks on the computer such as run programs, go online, print, and so on. Standard users can also install and uninstall certain software as well. It’s usually a good idea to make everyone on your computer a standard user, and then if they need something done that requires higher privileges, they can have an administrator do it. And by administrator, I mean you! Administrator user accounts have full control over the computer and can do things such as install or uninstall any software, add or remove user accounts, add or remove hardware, and make changes that affect Windows itself. If you are logged in as a standard user and need to do something that requires administrator access, many times you will get prompted to enter the username and password of an administrator so you don’t need to actually log out and then back in as an administrator to get the job done.

Creating User Accounts

With social media being all the rage and everyone and everything being connected, Microsoft decided to use what it calls a Microsoft account to log into your computer. This way, whenever you log into another device with the same account, it will use many of the same settings for a universal experience each time. A Microsoft account uses an email address to log in rather than a standard username. But if you are the type who likes to keep things old school (and simple), then you can still use a local user account to log in. Even if your computer was initially configured with a Microsoft account, you can convert it to a local account pretty easily. I find that local accounts are much easier to troubleshoot when it comes to login problems. To view the user accounts on your computer, go to the Windows 10 Settings, click on Accounts, and then on Family & other users. From this screen you will see your account and any other accounts configured on the computer.
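If you prefer the command line, here is one quick way to see the same list of accounts. This is just a minimal sketch, assuming a Windows 10 machine with Python 3 installed; it simply calls the built-in Windows "net user" command:

```python
# List the local user accounts on a Windows machine by running the
# built-in "net user" command and printing what it reports.
import subprocess

result = subprocess.run(
    ["net", "user"],       # built-in Windows command that lists accounts
    capture_output=True,   # capture the command's output
    text=True,             # decode the output as text instead of bytes
    check=True,            # raise an error if the command fails
)
print(result.stdout)
```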


Boulder Computer Repair

Computer Physicians loves Boulder! We are glad to be your full-time computer company in Boulder, CO. We have been in business since 1999. Our office is close by, and we service Boulder regularly. Call us for an appointment in Boulder, Colorado. We provide computer repair, upgrades, sales, installations, troubleshooting, networking, internet help, virus removal, and training.

Erie Computer Repair in Erie, CO Colorado

We are glad to be your full-time computer company in Erie, CO. We were located in Erie, Colorado from 2003 to 2015, and we are now close by in Longmont, CO, still servicing Erie regularly. Call us for an appointment in Erie, Colorado. We provide computer repair, upgrades, sales, installations, troubleshooting, networking, internet help, virus removal, and training.

Microsoft SCAM Solved

I went to fix a computer for a customer in Erie, Colorado, who had been scammed by someone who took over their computer through remote access, claiming to be from Microsoft.

Microsoft SCAM Erie, Colorado

I traced their steps, and what they did was very interesting: they used the Command Prompt to enter fake commands claiming that hackers were infiltrating the system and that the customer needed to pay money to fix the issue. They said they were from Microsoft and needed to fix the problems created by the hackers.

There were no hackers; the scammers simply planted fake messages in the places where you would check the system for errors. Here’s a printout of the Windows Command Prompt with the bogus information:

People who are not technicians are fooled by this, but this is a Command Prompt, not an error screen. That’s why it says the input is an unrecognized command: the scammers copied and pasted bogus error text into a prompt where you are only supposed to type commands. People who don’t know much about computers get confused by this.
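To see why the “errors” were fake, you can reproduce the trick yourself: anything typed at a Command Prompt that isn’t a real command just produces a “not recognized” message. Here is a minimal sketch, assuming Python 3 on a Windows machine (the scam text is made up for illustration):

```python
# Show what happens when scam text is typed into the Command Prompt:
# cmd just reports it as an unrecognized command, not a real error.
import subprocess

bogus = "hackers are infiltrating your system"   # made-up scam text
result = subprocess.run(["cmd", "/c", bogus], capture_output=True, text=True)
print(result.stderr or result.stdout)
# Prints something like:
# 'hackers' is not recognized as an internal or external command,
# operable program or batch file.
```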

They claimed the customer had to install Microsoft services at $1.54 apiece, 198 times, once for each service. Then they took the customer’s credit card information and charged the card for that, and God knows what else. They also worked very fast, having the customer do things on the computer to distract their attention and opening a lot of pop-up screens, all while controlling the computer with remote access.

Microsoft SCAM Fixed Erie Colorado

I was able to undo the damage they caused and get the computer back up and running like before, so in the end I fixed the issue. But people should call Computer Physicians as soon as they have a problem with their computer, so they don’t cause more issues. This scammer could have done worse if the customer had not called Longmont Computer Physicians to come solve the issue.

Boulder/Longmont Computer Repair – PC with no hard drive used

A Longmont, Colorado PC not using its hard drive:

Computer Physicians, LLC just worked on an unusual situation with a Zotac mini PC in Longmont, CO that had a Windows boot drive that was filled to capacity. I thought this would be good to share with my readers:

This very small Zotac mini PC running Windows 10 Home with 4GB of RAM was booting from a 64GB memory chip located on the motherboard and was not using the 300GB internal SATA hard drive. As a result, since the Windows OS was on the small 64GB memory chip, it quickly filled to capacity. I backed up the customer’s data to an external hard drive. The internal hard drive was not being used except to store a few small files. I could not clone the 64GB memory chip, but I was able to transfer the OS using special disk software. I then needed to go into the BIOS and set the boot drive to the internal drive. The computer runs slower now, since it is no longer using the small 64GB memory chip for Windows, and the CPU and the computer itself are an inexpensive, under-powered design meant to run from that 64GB chip. The problem with this design is that the 64GB memory chip quickly fills to capacity. (Windows 10 uses a lot of hard drive space; most systems have 1,000GB or more.)
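As a side note, checking how full a drive is takes only a few lines of code. This is a minimal sketch, assuming Python 3 is installed and the system drive is C: (as it was on this machine):

```python
# Report total, used, and free space on the system drive. A drive
# like the 64GB chip above would show almost no free space left.
import shutil

total, used, free = shutil.disk_usage("C:\\")  # assumes C: is the system drive
gb = 1024 ** 3  # bytes per gigabyte (GiB)
print(f"Total: {total / gb:.1f} GB  Used: {used / gb:.1f} GB  Free: {free / gb:.1f} GB")
```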

I do not like this design and would not recommend this Zotac computer to a client.

The computer would run faster if the original drive were replaced with a solid-state drive, the OS transferred to it, and more RAM installed.

These are some of the situations that Computer Physicians, LLC runs into.

-Steve

Longmont’s Newest Computer Viruses – Longmont/Boulder CO – Computer Physicians

Computer Repair and Virus Removal in Longmont, CO – Computer Physicians, LLC

Here is some news about the latest computer viruses out today that Computer Physicians in Longmont/Boulder, CO can help you with:

From TechNewsWorld:

A new ransomware exploit dubbed “Petya” struck major companies and infrastructure sites in July 2017, following the previous month’s WannaCry ransomware attack, which wreaked havoc on more than 300,000 computers across the globe. Petya is believed to be linked to the same set of hacking tools as WannaCry.

Petya already has taken thousands of computers hostage, impacting companies and installations ranging from Ukraine to the U.S. to India. It has impacted a Ukrainian international airport, and multinational shipping, legal and advertising firms. It has led to the shutdown of radiation monitoring systems at the Chernobyl nuclear facility.


Trends in PC technology – Computer Physicians Longmont/Boulder/Erie, CO

 https://www.computer-physicians.com/
Computer repair data recovery networking virus removal in Longmont/Boulder/Denver Colorado

Here is a good article that discusses the changes and trends in PC technology.

Past, Present and Future Trends in the Use of Computers in Fisheries Research
by Bernard A. Megrey and Erlend Moksness

1.2 Hardware Advances
It is difficult not to marvel at how quickly computer technology advances. The current typical desktop or laptop computer, compared to the original monochrome 8 KB random access memory (RAM), 4 MHz 8088 microcomputer or the original Apple II, has improved several orders of magnitude in many areas. The most notable of these hardware advances are processing capability, color graphics resolution and display technology, hard disk storage, and the amount of RAM. The most remarkable thing is that since 1982, the cost of a high-end microcomputer system has remained in the neighborhood of US$3,000. This statement was true in 1982, at the printing of the last edition of this book in 1996, and it holds true today.
1.2.1 CPUs and RAM
While we can recognize that computer technology changes quickly, this statement does not seem to adequately describe what sometimes seems to be the breakneck pace of improvements in the heart of any electronic computing engine, the central processing unit (CPU). The transistor, invented at Bell Labs in 1947, is the fundamental electronic component of the CPU chip. Higher performance CPUs require more logic circuitry, and this is reflected in steadily rising transistor densities. Simply put, the number of transistors in a CPU is a rough measure of its computational power, which is usually measured in floating point mathematical operations per second (FLOPS). The more transistors there are in the CPU, or silicon engine, the more work it can do.

Trends in transistor density over time reveal that density typically doubles approximately every year and a half, according to a well-known axiom known as Moore’s Law. This proposition, suggested by Intel co-founder Gordon Moore (Moore 1965), was part observation and part marketing prophecy. In 1965 Moore, then director of R&D at Fairchild Semiconductor, the first large-scale producer of commercial integrated circuits, wrote an internal paper in which he drew a line through five points representing the number of components per integrated circuit for minimum cost for the components developed between 1959 and 1964. The prediction arising from this observation became a self-fulfilling prophecy that emerged as one of the driving principles of the semiconductor industry. As it relates to computer CPUs (one type of integrated circuit), Moore’s Law states that the number of transistors packed into a CPU doubles every 18–24 months.
Figure 1.1 supports this claim. In 1979, the 8088 CPU had 29,000 transistors. In 1997, the Pentium II had 7.5 million transistors; in 2000 the Pentium 4 had 420 million; and the trend continues, so that in 2007 the Dual-Core Itanium 2 processor has 1.7 billion transistors.

[Figure 1.1: Trends in transistor density and CPU performance over time; the y-axis is on a log scale. (Source: http://en.wikipedia.org/wiki/Teraflop, accessed 12 January 2008)]
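As a quick back-of-the-envelope check of those figures (just arithmetic on the transistor counts quoted above, done here in Python):

```python
# Check Moore's Law against the transistor counts quoted above:
# 29,000 transistors in 1979 vs. 1.7 billion in 2007.
import math

t0, n0 = 1979, 29_000            # Intel 8088
t1, n1 = 2007, 1_700_000_000     # Dual-Core Itanium 2

doublings = math.log2(n1 / n0)                    # number of doublings
months_per_doubling = (t1 - t0) * 12 / doublings  # average doubling time

print(f"{doublings:.1f} doublings in {t1 - t0} years")
print(f"about one doubling every {months_per_doubling:.0f} months")
# Roughly 15.8 doublings, about one every 21 months, which falls
# squarely inside the 18-24 month range Moore's Law predicts.
```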
Manufacturing technology appears to be reaching its limits in terms of how densely silicon chips can be manufactured – in other words, how many transistors can fit onto CPU chips and how fast their internal clocks can be run. As stated recently in the BBC News, “The industry now believes that we are approaching the limits of what classical technology – classical being as refined over the last 40 years – can do.” There is a problem with making microprocessor circuitry smaller. Power leaks, the unwanted leakage of electricity or electrons between circuits packed ever closer together, take place. Overheating becomes a problem as processor architecture gets ever smaller and clock speeds increase.

Traditional processors have one processing engine on a chip. One method used to increase performance through higher transistor densities, without increasing clock speed, is to put more than one CPU on a chip and to allow them to independently operate on different tasks (called threads). These advanced chips are called multiple-core processors. A dual-core processor squeezes two CPU engines onto a single chip. Quad-core processors have four engines. Multiple-core chips are all 64-bit, meaning that they can work through 64 bits of data per instruction. That is twice the rate of the current standard 32-bit processor. A dual-core processor theoretically doubles your computing power, since a dual-core processor can handle two threads of data simultaneously. The result is that there is less waiting for tasks to complete. A quad-core chip can handle four threads of data.
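The idea of independent threads running on separate cores is easy to picture with a toy example. The sketch below (my illustration, not from the article) splits a CPU-bound job into independent chunks and hands one to each worker process, so a multi-core machine can run them simultaneously:

```python
# Toy illustration of multi-core parallelism: independent chunks of a
# CPU-bound job are distributed across a pool of worker processes.
from multiprocessing import Pool

def count_primes(limit: int) -> int:
    """Deliberately CPU-heavy: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [40_000, 40_000, 40_000, 40_000]  # four independent tasks
    with Pool() as pool:                       # one worker per CPU core by default
        results = pool.map(count_primes, chunks)
    print(sum(results), "primes found across", len(chunks), "chunks")
```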
Progress marches on. Intel announced in February 2007 that it had a prototype CPU that contains 80 processor cores and is capable of 1 teraflop (10^12 floating point operations per second) of processing capacity. The potential uses of a desktop fingernail-sized 80-core chip with supercomputer-like performance will open unimaginable opportunities (Source: http://www.intel.com/pressroom/archive/releases/20070204comp.htm, accessed 12 January 2008).

As if multiple-core CPUs were not powerful enough, new products being developed will feature “dynamically scalable” architecture, meaning that virtually every part of the processor – including cores, cache, threads, interfaces, and power – can be dynamically allocated based on performance, power and thermal requirements.

Supercomputers may soon be the same size as a laptop if IBM brings silicon nanophotonics to the market. In this new technology, wires on a chip are replaced with pulses of light on tiny optical fibers for quicker and more power-efficient data transfers between processor cores on a chip. This new technology is about 100 times faster, consumes one-tenth as much power, and generates less heat.
Multi-core processors pack a lot of power. There is just one problem: most software programs are lagging behind hardware improvements. To get the most out of a 64-bit processor, you need an operating system and application programs that support it. Unfortunately, as of the time of this writing, most software applications and operating systems are not written to take advantage of the power made available with multiple cores. Slowly this will change. Currently there are 64-bit versions of Linux, Solaris, Windows XP, and Vista. However, 64-bit versions of most device drivers are not available, so for today’s uses, a 64-bit operating system can become frustrating due to a lack of available drivers.
Another current developing trend is building high performance computing environments using computer clusters, which are groups of loosely coupled computers, typically connected through fast local area networks. A cluster works together so that multiple processors can be used as though they are a single computer. Clusters are usually deployed to improve performance over that provided by a single computer, while typically being much less expensive than single computers of comparable speed or availability.

Beowulf is a design for high-performance parallel computing clusters built on inexpensive personal computer hardware. It was originally developed by NASA’s Thomas Sterling and Donald Becker. The name comes from the main character in the Old English epic poem Beowulf.

A Beowulf cluster of workstations is a group of usually identical PC computers, configured into a multi-computer architecture, running an open source Unix-like operating system such as BSD or Solaris. They are joined into a small network and have libraries and programs installed that allow processing to be shared among them. The server node controls the whole cluster and serves files to the client nodes. It is also the cluster’s console and gateway to the outside world. Large Beowulf machines might have more than one server node, and possibly other nodes dedicated to particular tasks, for example consoles or monitoring stations. Nodes are configured and controlled by the server node, and do only what they are told to do in a disk-less client configuration.

There is no particular piece of software that defines a cluster as a Beowulf. Commonly used parallel processing libraries include the Message Passing Interface (MPI), which permits the programmer to divide a task among a group of networked computers and recollect the results of processing. Software must be revised to take advantage of the cluster. Specifically, it must be capable of performing multiple independent parallel operations that can be distributed among the available processors. Microsoft also distributes a Windows Compute Cluster Server 2003 (Source: http://www.microsoft.com/windowsserver2003/ccs/default.aspx, accessed 12 January 2008) to facilitate building a high-performance computing resource based on Microsoft’s Windows platforms.

One of the main differences between Beowulf and a cluster of workstations is that Beowulf behaves more like a single machine rather than many workstations.
In a Beowulf cluster, each node can be thought of as a CPU + memory package which can be plugged into the cluster, just like a CPU or memory module can be plugged into a motherboard (Source: http://en.wikipedia.org/wiki/Beowulf_(computing), accessed 12 January 2008). Beowulf systems are now deployed worldwide, chiefly in support of scientific computing, and their use in fisheries applications is increasing. Typical configurations consist of multiple machines built on AMD’s Opteron 64-bit and/or Athlon X2 64-bit processors.
Memory is the most readily accessible large-volume storage available to the CPU. We expect that standard RAM configurations will continue to increase as operating systems and application software become more full-featured and demanding of RAM. For example, the “recommended” configuration for Windows Vista Home Premium Edition and Apple’s new Leopard operating systems is 2 GB of RAM: 1 GB to hold the operating system, leaving 1 GB for data and application code. In the previous edition, we predicted that in 3–5 years (1999–2001) 64–256 megabytes (MB) of dynamic RAM would be available and machines with 64 MB of RAM would be typical. This prediction was incredibly inaccurate. Over the years, advances in semiconductor fabrication technology have made gigabyte memory configurations not only a reality, but commonplace.
Not all RAM performs equally. Newer types, called double data rate RAM (DDR), decrease the time it takes for the CPU to communicate with memory, thus speeding up computer execution. DDR comes in several flavors. DDR has been around since 2000 and is sometimes called DDR1. DDR2 was introduced in 2003. It took a while for DDR2 to reach widespread use, but you can find it in most new computers today. DDR3 began appearing in mid-2007. RAM simply holds data for the processor. However, there is a cache between the processor and the RAM: the L2 cache. The processor sends data to this cache. When the cache overflows, data are sent to the RAM. The RAM sends data back to the L2 cache when the processor needs it. DDR RAM transfers data twice per clock cycle. The clock rate, measured in cycles per second, or hertz, is the rate at which operations are performed. DDR clock speeds range between 200 MHz (DDR-200) and 400 MHz (DDR-400). DDR-200 transfers 1,600 megabytes per second (MB/s), while DDR-400 transfers 3,200 MB/s.

DDR2 RAM is twice as fast as DDR RAM. The bus carrying data to DDR2 memory is twice as fast, which means twice as much data are carried to the module for each clock cycle. DDR2 RAM also consumes less power than DDR RAM. DDR2 speeds range between 400 MHz (DDR2-400) and 800 MHz (DDR2-800). DDR2-400 transfers 3,200 MB/s; DDR2-800 transfers 6,400 MB/s. DDR3 RAM is twice as fast as DDR2 RAM, at least in theory. DDR3 RAM is also more power-efficient than DDR2 RAM. DDR3 speeds range between 800 MHz (DDR3-800) and 1,600 MHz (DDR3-1600). DDR3-800 transfers 6,400 MB/s; DDR3-1600 transfers 12,800 MB/s.
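Those transfer figures follow from simple arithmetic: the module’s effective transfer rate (in millions of transfers per second) times 8 bytes per transfer on a 64-bit memory bus. A small sketch of the math:

```python
# Peak transfer rate for DDR-family memory on a 64-bit (8-byte) bus:
# rate in MB/s = effective transfers per second (MT/s) * 8 bytes.
BUS_WIDTH_BYTES = 8  # 64 bits

def peak_mb_per_s(effective_mt_per_s: int) -> int:
    return effective_mt_per_s * BUS_WIDTH_BYTES

for name, mts in [("DDR-200", 200), ("DDR-400", 400),
                  ("DDR2-800", 800), ("DDR3-1600", 1600)]:
    print(f"{name}: {peak_mb_per_s(mts):,} MB/s")
# DDR-200: 1,600 MB/s ... DDR3-1600: 12,800 MB/s, matching the
# figures quoted in the text.
```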
As processors increased in performance, the addressable memory space also increased as the chips evolved from 8-bit to 64-bit. Bytes of data readily accessible to the processor are identified by a memory address, which by convention starts at zero and ranges to the upper limit addressable by the processor. A 32-bit processor typically uses memory addresses that are 32 bits wide. The 32-bit wide address allows the processor to address 2^32 bytes (B) of memory, which is exactly 4,294,967,296 B, or 4 GB. Desktop machines with a gigabyte of memory are common, and boxes configured with 4 GB of physical memory are easily available. While 4 GB may seem like a lot of memory, many scientific databases have indices that are larger. A 64-bit wide address theoretically allows 18 million terabytes of addressable memory (1.8 × 10^19 B). Realistically, 64-bit systems will typically access approximately 64 GB of memory in the next 5 years.
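The 4 GB and 18-million-terabyte figures fall straight out of the address width; here is the arithmetic:

```python
# Address space implied by pointer width: a w-bit address can
# distinguish 2**w distinct bytes.
for bits in (32, 64):
    total = 2 ** bits
    print(f"{bits}-bit: 2^{bits} = {total:,} bytes = {total / 2**30:,.0f} GiB")
# 32-bit: 4,294,967,296 bytes (4 GiB)
# 64-bit: about 1.8e19 bytes, i.e. roughly 18 million terabytes
```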
1.2.2 Hard Disks and Other Storage Media
Improvements in hard disk storage since our last edition have advanced as well. One of the most amazing things about hard disks is that they both change and don’t change more than most other components. The basic design of today’s hard disks is not very different from the original 5¼ in, 10 MB hard disk that was installed in the first IBM PC/XTs in the early 1980s. However, in terms of capacity, storage, reliability and other characteristics, hard drives have substantially improved, perhaps more than any other PC component behind the CPU. Seagate, a major hard drive manufacturer, estimates that drive capacity increases by roughly 60% per year (Source: http://news.zdnet.co.uk/communications/0,100,0000085,2067661,00.htm, accessed 12 January 2008).

Some of the trends in various important hard disk characteristics (Source: http://www.PCGuide.com, accessed 12 January 2008) are described below. The areal density of data on hard disk platters continues to increase at an amazing rate, even exceeding some of the optimistic predictions of a few years ago. Densities are now approaching 100 Gbits per square inch, and modern disks are now packing as much as 75 GB of data onto a single 3.5 in platter (Source: http://www.fujitsu.com/downloads/MAG/vol42-1/paper08.pdf, accessed 12 January 2008). Hard disk capacity continues to not only increase, but increase at an accelerating rate. The rate of technology development, measured in areal density growth, is about twice that of Moore’s Law for semiconductor transistor density (Source: http://www.tomcoughlin.com/Techpapers/head&medium.pdf, accessed 12 January 2008).
The trend towards larger and larger capacity drives will continue for both desktops and laptops. We have progressed from 10 MB in 1981 to well over 10 GB in 2000. Multiple terabyte (1,000 GB) drives are already available. Today the standard for most off-the-shelf laptops is around 120–160 GB. There is also a move to faster and faster spindle speeds. Since increasing the spindle speed improves both random-access and sequential performance, this is likely to continue. Once the domain of high-end SCSI (Small Computer System Interface) drives, 7,200 RPM spindles are now standard on mainstream desktop and notebook hard drives, and 10,000 and 15,000 RPM models are beginning to appear. The trend in size, or form factor, is downward: to smaller and smaller drives. 5.25 in drives have now all but disappeared from the mainstream PC market, with 3.5 in drives dominating the desktop and server segment. In the mobile world, 2.5 in drives are the standard, with smaller sizes becoming more prevalent. IBM in 1999 announced its Microdrive, a tiny 1 GB device only an inch in diameter and less than 0.25 in thick. It can hold the equivalent of 700 floppy disks in a package as small as 24.2 mm in diameter. Desktop and server drives have transitioned to the 2.5 in form factor as well, where they are used widely in network devices such as storage hubs and routers, blade servers, small form factor network servers and RAID (Redundant Arrays of Inexpensive Disks) subsystems. Small 2.5 in form factor (i.e. “portable”) high performance hard disks, with capacities around 250 GB and using the USB 2.0 interface, are becoming common and easily affordable. The primary reasons for this “shrinking trend” include the enhanced rigidity of smaller platters, reduction in platter mass enabling faster spin speeds, and improved reliability due to enhanced ease of manufacturing.

Both positioning and transfer performance factors are improving. The speed with which data can be pulled from the disk is increasing more rapidly than positioning performance is improving, suggesting that over the next few years addressing seek time and latency will be the areas of greatest attention for hard disk engineers. The reliability of hard disks is improving slowly as manufacturers refine their processes and add new reliability-enhancing features, but this characteristic is not changing nearly as rapidly as the others above. One reason is that the technology is constantly changing and the performance envelope is constantly being pushed; it’s much harder to improve the reliability of a product when it is changing rapidly.
Once the province of high-end servers, the use of multiple disk arrays (RAIDs) to improve performance and reliability is becoming increasingly common, and multiple hard disks configured as an array are now frequently seen in consumer desktop machines. Finally, the interface used to deliver data from a hard disk has improved as well. Despite the introduction to the PC world of new interfaces such as IEEE-1394 (FireWire) and USB (universal serial bus), the mainstream interfaces in the PC world are the same as they were through the 1990s: IDE/ATA/SATA and SCSI. These interfaces are all going through improvements. The new external SATA interface (eSATA) is capable of transfer rates of 1.5–3.0 Gbit/s. USB transfers data at 480 Mbit/s, and FireWire is available in 400 and 800 Mbit/s versions. USB 3.0 has been announced, and it will offer speeds up to 4.8 Gbit/s. FireWire will also improve, to speeds in the range of 3.2 Gbit/s. The interfaces will continue to create new and improved standards with higher data transfer rates to match the increase in performance of the hard disks themselves.
In summary, since 1996, faster spindle speeds, smaller form factors, multiple double-sided platters coated with higher density magnetic coatings, and improved recording and data interface technologies have substantially increased hard disk storage and performance. At the same time, the price per unit of storage has decreased.