Fri, Feb 20 2004 04:47:29
Request created by guest

Subject: r.cost: too much hard disk access with big regions
Platform: GNU/Linux/i386
grass obtained from: Mirror of Trento site
grass binary for platform: Compiled from Sources
GRASS Version: 5.3 cvs feb 2004
Hi,
When using r.cost for a 3130 x 4400 cell region, r.cost is very very slow. This
seems to be because it is spending all its time reading & writing to the disk
-- the processor use is usually pretty low (sub 50%) while it waits. There are
four temporary files created in this example region, 2x 122mb [in_file, out_file],
and two others at the end which are both pretty small. Memory use for this example
is ~126mb. I've got a ~ 70% MASK in place, don't know how much that is helping
me here. (CELL map)
It would be great if it could load the temp files into memory instead (perhaps
by an option flag) to speed up processing for those with lots of RAM (here >512mb)
on their systems.
I don't think support for a 5000x5000 map size is too much to ask for.
I don't know enough memory voodoo to implement this properly myself..
thanks,
Hamish

Fri, Feb 20 2004 11:22:26
Mail sent by glynn.clements@virgin.net

Return-Path: <glynn.clements@virgin.net>
Delivered-To: grass-bugs@lists.intevation.de
From: Glynn Clements <glynn.clements@virgin.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <16437.49028.642380.715312@cerise.nosuchdomain.co.uk>
Date: Fri, 20 Feb 2004 08:04:20 +0000
To: Request Tracker <grass-bugs@intevation.de>
Cc: grass5@grass.itc.it
Subject: Re: [GRASS5] [bug #2327] (grass) r.cost: too much hard disk access with big regions
In-Reply-To: <20040220034729.BB42C139AD@lists.intevation.de>
References: <20040220034729.BB42C139AD@lists.intevation.de>
X-Mailer: VM 7.07 under 21.4 (patch 15) "Security Through Obscurity" XEmacs Lucid
X-Spam-Status: No, hits=-4.9 tagged_above=-999.0 required=3.0 tests=BAYES_00
X-Spam-Level:
Request Tracker wrote:
> this bug's URL: http://intevation.de/rt/webrt?serial_num=2327
> -------------------------------------------------------------------------
>
> Subject: r.cost: too much hard disk access with big regions
> When using r.cost for a 3130 x 4400 cell region, r.cost is very very
> slow. This seems to be because it is spending all its time reading &
> writing to the disk -- the processor use is usually pretty low (sub
> 50%) while it waits. There are four temporary files created in this
> example region, 2x 122mb [in_file, out_file], and two others at the
> end which are both pretty small. Memory use for this example is
> ~126mb. I've got a ~ 70% MASK in place, don't know how much that is
> helping me here. (CELL map)
>
> It would be great if it could load the temp files into memory instead
> (perhaps by an option flag) to speed up processing for those with lots
> of RAM (here >512mb) on their systems.
r.cost uses the segment library; changing that would probably involve
substantially re-writing r.cost. It would probably also put a ceiling
on the size of maps which it could handle (unless you provide both
segment-based and memory-based implementations of the algorithms).
However: increasing the segments_in_memory variable may help; maybe
this should be controlled by a command-line option.
--
Glynn Clements <glynn.clements@virgin.net>
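
For context, here is a minimal sketch (not the actual r.cost source; it uses the GRASS 5/6-era C API, and the option name, tile size and temp-file handling are illustrative assumptions) of how the segment library's "segments in memory" count works and how it could be exposed as a command-line option. The third argument to segment_init() is the number of segments kept resident in RAM; raising it trades memory for fewer reads and writes of the temp file:

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <grass/gis.h>
#include <grass/segment.h>

int main(int argc, char *argv[])
{
    struct Option *mem_opt;
    SEGMENT seg;
    char *tmpname;
    int fd, nrows, ncols;
    int srows = 64, scols = 64;           /* segment tile size (assumed) */

    G_gisinit(argv[0]);

    /* hypothetical option exposing segments_in_memory to the user */
    mem_opt = G_define_option();
    mem_opt->key = "segments_in_memory";
    mem_opt->type = TYPE_INTEGER;
    mem_opt->required = NO;
    mem_opt->answer = "4";
    mem_opt->description = "Number of segments to keep in memory";

    if (G_parser(argc, argv))
        exit(EXIT_FAILURE);

    nrows = G_window_rows();
    ncols = G_window_cols();

    /* create and format the temp file that backs the segmented raster */
    tmpname = G_tempfile();
    fd = creat(tmpname, 0666);
    segment_format(fd, nrows, ncols, srows, scols, sizeof(CELL));
    close(fd);

    /* re-open it and keep N segments resident; a larger N means the
       segment library touches the disk less often */
    fd = open(tmpname, O_RDWR);
    segment_init(&seg, fd, atoi(mem_opt->answer));

    /* ... segment_get()/segment_put() calls would do the real work ... */

    segment_release(&seg);
    close(fd);
    unlink(tmpname);
    return 0;
}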

Thu, Apr 15 2004 08:46:29
Mail sent by hbowman

> > Subject: r.cost: too much hard disk access with big regions
>
> > When using r.cost for a 3130 x 4400 cell region, r.cost is very very
> > slow. This seems to be because it is spending all its time reading &
> > writing to the disk -- the processor use is usually pretty low (sub
> > 50%) while it waits. There are four temporary files created in this
> > example region, 2x 122mb [in_file, out_file], and two others at the
> > end which are both pretty small. Memory use for this example is
> > ~126mb. I've got a ~ 70% MASK in place, don't know how much that is
> > helping me here. (CELL map)
> >
> > It would be great if it could load the temp files into memory instead
> > (perhaps by an option flag) to speed up processing for those with lots
> > of RAM (here >512mb) on their systems.
>
> r.cost uses the segment library; changing that would probably involve
> substantially re-writing r.cost. It would probably also put a ceiling
> on the size of maps which it could handle (unless you provide both
> segment-based and memory-based implementations of the algorithms).
>
> However: increasing the segments_in_memory variable may help; maybe
> this should be controlled by a command-line option.
Increasing that shaves a few seconds off, but doesn't have any great effect.
I think a related (perception) problem may be that the G_percent() progress
display during the "Finding cost path" step isn't correct: it isn't linear, and
the calculation finishes way before it reaches 100%.
src/raster/r.cost/cmd/main.c line 658:
G_percent (++n_processed, total_cells, 1);
I should have mentioned I'm using a new serial-ATA hard drive. Although I
haven't spent any time tuning it, it's bloody fast.
Hamish
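
To illustrate the percentage problem, here is a rough sketch (not the r.cost source; the cause shown is an assumption, and the function and variable names are illustrative) of one way a G_percent()-driven display can finish well before 100%: the total is taken from the full region, but masked/null cells are skipped and never counted, so the last call to G_percent() happens long before n_processed reaches total_cells:

#include <grass/gis.h>

/* Sketch of a "Finding cost path" style loop.  total_cells counts every
   cell in the region, but n_processed only advances for non-null cells,
   so with a large MASK the displayed percentage stalls short of 100%. */
static void find_cost_sketch(CELL **cost, int nrows, int ncols)
{
    long total_cells = (long)nrows * ncols;
    long n_processed = 0;
    int row, col;

    for (row = 0; row < nrows; row++) {
        for (col = 0; col < ncols; col++) {
            if (G_is_c_null_value(&cost[row][col]))
                continue;                 /* skipped, never counted */
            /* ... expand this cell in the cost surface ... */
            G_percent(++n_processed, total_cells, 1);
        }
    }
    /* one fix: compute total_cells from the non-null cell count, or
       force the final line with G_percent(total_cells, total_cells, 1) */
}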

Wed, Dec 15 2004 05:07:07
Subject changed to "r.cost: Finding cost path % done is all wrong" by hbowman