APLawrence.com -  Resources for Unix and Linux Systems, Bloggers and the self-employed

file system fragmentation defragmentation linux

Message-ID: <3BB50505.7EDE2C5B@sympatico.ca>
From: Lew Pitcher <lpitcher@sympatico.ca>
Newsgroups: alt.os.linux,alt.os.linux.mandrake,comp.os.linux.misc
Subject: Re: Defrag in linux? - Newbie question
References: <flQs7.504$Cu2.29521@eagle.america.net> 
Date: Fri, 28 Sep 2001 19:17:25 -0400

mac wrote:
> Do I need to defrag the HD in Linux?  If yes, how?

Short answer: No.

Long answer: see below

In a single-user, single-tasking OS, it's best to keep all
blocks for a file together, because _most_ of the disk accesses
over a given period of time will be against a single file. In
this scenario, the read-write heads of your HD advance
sequentially through the hard disk. In the same sort of system,
if your file is fragmented, the read-write heads jump all over
the place, adding seek time to the hard disk access time.

In a multi-user, multi-tasking, multi-threaded OS, many files are
being accessed at any time, and, if left unregulated, the disk
read-write heads would jump all over the place all the time. Even
with 'defragmented' files, there would be as much seek-time delay
as there would be with a single-user, single-tasking OS and
fragmented files.

Fortunately, multi-user, multi-tasking, multi-threaded OSs are
usually built smarter than that. Because file access is effectively
random (accesses come from multiple, unrelated processes, with no
order imposed on the sequence of blocks requested), the device
driver reorders the requests into something sensible for the
device (i.e. an elevator algorithm).
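The elevator idea can be sketched in a few lines. This is only a toy model, not actual driver code; the block addresses, head position, and the particular SCAN variant (sweep upward, then wrap to the lowest remaining request) are invented for illustration:

```python
# Toy sketch of an elevator (SCAN-style) reordering of pending disk
# requests. All numbers are invented; real drivers are far more subtle.

def elevator_order(pending, head):
    """Service pending block addresses in one upward sweep from the
    current head position, then wrap to the lowest remaining address."""
    above = sorted(b for b in pending if b >= head)
    below = sorted(b for b in pending if b < head)
    return above + below

def total_seek(order, head):
    """Total head movement (in blocks) to service requests in order."""
    distance = 0
    for b in order:
        distance += abs(b - head)
        head = b
    return distance

# Requests arrive interleaved from unrelated processes.
pending = [71, 10, 95, 12, 73, 40, 97, 41, 75, 42, 99, 14]
head = 50

print(total_seek(pending, head))                        # FIFO order: 666
print(total_seek(elevator_order(pending, head), head))  # swept order: 170
```

Serviced first-come-first-served, the head ping-pongs across the disk; sorted into a sweep, the same requests cost a fraction of the movement, regardless of which file each block belongs to.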

In other words, fragmentation is a concern when one (and only
one) process accesses data from one (and only one) file. When
more than one file is involved, the disk addresses being
requested are 'fragmented' wrt the sequence that the driver
has to service them, and thus it doesn't matter to the device
driver whether or not a file was fragmented.

To illustrate:

I have two programs executing simultaneously, each reading two
different files.

The files are organized sequentially (unfragmented) on disk...

Program 1 reads file 1, block 1
                file 1, block 2
                file 2, block 1
                file 2, block 2
                file 2, block 3
                file 1, block 3

Program 2 reads file 3, block 1
                file 4, block 1
                file 3, block 2
                file 4, block 2
                file 3, block 3
                file 4, block 4

The OS scheduler causes the programs to be scheduled and
executed such that the device driver receives requests
                file 3, block 1
                file 1, block 1
                file 4, block 1
                file 1, block 2
                file 3, block 2
                file 2, block 1
                file 4, block 2
                file 2, block 2
                file 3, block 3
                file 2, block 3
                file 4, block 4
                file 1, block 3

As you can see, the accesses are already 'fragmented' and we
haven't even reached the disk yet. I have to stress this: the
above situation is _no different_ from an MSDOS single-file
access against a fragmented file.

So, how do we minimize the effect seen above? If you are MSDOS,
you reorder the blocks on disk to match the (presumed) order
in which they will be requested.  OTOH, if you are Linux, you
reorder the _requests_ into a regular sequence that minimizes
disk access. You also buffer most of the data in memory, and
you only write dirty blocks. In other words, you minimize the
effect of 'disk file fragmentation' as part of the other
optimizations you perform on the _access requests_ before you
execute them.
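This is why per-file contiguity stops mattering once the driver sorts the batch. A toy model (all block numbers invented for illustration) of the two files above makes the point: after sorting, the sweep covers the same set of addresses whether each file's blocks were laid out contiguously or scattered:

```python
# Toy model: interleaved requests from two concurrent readers, serviced
# as one sorted sweep. Block addresses are invented for illustration.

def sweep_cost(addresses, head=0):
    """Head movement to service a batch sorted into one ascending sweep."""
    distance = 0
    for a in sorted(addresses):
        distance += abs(a - head)
        head = a
    return distance

# Two files of three blocks each, accessed concurrently.
contiguous = {("f1", 0): 100, ("f1", 1): 101, ("f1", 2): 102,
              ("f2", 0): 200, ("f2", 1): 201, ("f2", 2): 202}
# Same six blocks, but each file's blocks scattered across the region.
fragmented = {("f1", 0): 100, ("f1", 1): 202, ("f1", 2): 101,
              ("f2", 0): 200, ("f2", 1): 102, ("f2", 2): 201}

# The interleaved request stream the driver actually sees.
batch = [("f1", 0), ("f2", 0), ("f1", 1), ("f2", 1), ("f1", 2), ("f2", 2)]

print(sweep_cost(contiguous[r] for r in batch))  # 202
print(sweep_cost(fragmented[r] for r in batch))  # 202
```

Both layouts occupy blocks 100-102 and 200-202, so once the requests are sorted the sweep cost is identical; the 'fragmentation' that matters is in the request stream, and the driver has already dealt with it.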

Now, this is not to say that 'disk file fragmentation' is a
good thing.  It's just that 'disk file fragmentation' doesn't
have the *impact* here that it would have in MSDOS-based
systems. The performance difference between a 'disk file
fragmented' Linux file system and a 'disk file unfragmented'
Linux file system is minimal to none, where the same performance
difference under MSDOS would be huge.

Under the right circumstances, fragmentation is a neutral
thing, neither bad nor good. As to defraging a Linux filesystem
(ext2fs), there are tools available, but (because of the design
of the system) these tools are rarely (if ever) needed or
used.  That's the impact of designing the
multi-processing/multi-tasking, multi-user capacity of the OS
into its facilities up front, rather than tacking
multi-processing/multi-tasking, multi-user support onto an
inherently single-processing, single-tasking, single-user system.

Lew Pitcher

Master Codewright and JOAT-in-training
Registered Linux User #112576



I would differ from the above explanation. The "experts" in many sites and articles see fragmentation as a process affecting file reads. I'd think it's a process affecting file writes also.

File writes and subsequent updates are what cause fragmentation particularly when the disk is at 85% of its capacity.

The tracks being allocated in sectors or cylinders does not matter much.

What all the "experts" mean is that DOS and Windows have less efficient disk access drivers than Unix variants. So they infer that the Linux file system does not have fragmentation issues, which is wrong.

Fragmentation occurs on all hard disks, except the proposed hard disks with bubble memory, which we may see a long way in the future.

Fragmentation is a disk thing and not an OS thing. Please don't say that Linux disks do not have fragmentation issues merely because the OS employs better disk drivers.

Raja Surapaneni..
MCSE, CNE & Linux fan


You aren't understanding. It's not that Unix and Linux disks don't have fragmentation; it's that fragmentation isn't the issue it is in single-user OSes, both because the file systems are designed with that in mind and because multi-user access is inherently fragmented.


Tue Aug 14 19:45:06 2007: 3077   anonymous

You just misread the original post ! Read carefully!

Thu Jun 25 02:50:33 2009: 6542   anonymous

Well, I don't do much (add software etc.) to my Linux system, but its file accessing is slowing down, as is booting etc., just like a winx machine. I am new to Linux, so most of what I do is open things, read the Linux book I have, surf the how-to Linux sites and exercise the features to learn more about Linux, but it sure seems to me that some sort of file fragmentation is going on. The net seems to be evenly split about this happening, and some developers are looking into it (they say, anyway).

Fri Jun 26 05:50:29 2009: 6546   TonyLawrence


It is certainly possible that you have a specific situation where defragging would be helpful to your system.

However, I still think you need to read this again more carefully and also (link)
