When Big Disks Meets Slow Process…

Sometimes something simple and easy and not at all a “problem” can turn into a quagmire.

That’s what I’m stuck in at the moment.

It will eventually sort itself out, just a matter of time. But I thought I’d take this moment to explain my slightly lower than typical ‘participation rate’ in all things internet and posting.

A little while ago, I showed how to download a chunk of data from a web site with one easy command. I also showed “how that can go wrong” if you let it follow, recursively, parent links. No problem, thought I. (Or more correctly “problem kept away”…) as I plunged back into the task of just sucking down some subsets of the data.
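
Roughly the sort of command involved (the exact URL and the rate limiting values here are just illustrative); the -np flag is what keeps wget from following parent links up and out to the whole site:

wget -m -np --wait=1 --limit-rate=200k ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/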

Well, one thing leads to another, and as my 64 GB SD card on the R.PiM2 filled up, I had to cope. Either halt the whole thing and think about it (which would have been the right choice, since ‘wget’ can be set to rescan fairly quickly and only download the missing / new bits) or pause the process, shuffle some data around, and then be ‘fancy’ about the restart. Wanting to show off to myself how ‘trick’ I could be, I chose that option.

Linux / Unix has a nice little feature. In a terminal window with a running process you can ‘suspend’ the process and return to a command shell to ‘do something’, then resume the process. This is done with Ctrl-z to suspend and “fg” to bring it back to the foreground and start it running again.

I had one transfer going on that had used most of the space, and two smaller ones that had not used much, so, simple: just pause the ‘big one’, move that data to a different disk (I plugged in a USB disk with about 70 GB free and a nice EXT type Linux file system), and move the directory where everything was being stored onto that disk. The last step was to put a ‘symbolic link’ in the old location that says “go look over there now”. At that point, about 34 GB of free space on the chip and “fg”… Off to the races again.
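
The sequence was roughly this (a reconstruction rather than an exact transcript): Ctrl-z to suspend the running wget and get the shell back, then:

mkdir -p /WD/MIRRORS
mv /MIRRORS/cdiac.ornl.gov /WD/MIRRORS/
ln -s /WD/MIRRORS/cdiac.ornl.gov/ /MIRRORS/cdiac.ornl.gov
fg

The result shows up in the listing just below: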

root@RaPiM2:/MIRRORS# ls -l
lrwxrwxrwx 1 root root   27 Aug 31 22:14 cdiac.ornl.gov -> /WD/MIRRORS/cdiac.ornl.gov/
drwxr-xr-x 3 pi   pi   4096 Aug 29 03:17 ftp.ncdc.noaa.gov

Notice that my ‘prompt’ says I’m in the “/MIRRORS” directory. Here there are top level directory names for each of the sites that are being mirrored. CDIAC (Carbon Dioxide Information …) was ‘the big one’ that I moved. That first line is a ‘link’: notice that the first character is an ‘l’ for link, while the one just below it is a ‘d’ for ‘directory’. Here you can see that /MIRRORS/cdiac.ornl.gov is now pointed over to /WD/MIRRORS/cdiac.ornl.gov instead. Now, when the wget command is resumed, it just carries on as though the data were still in /MIRRORS, but it gets redirected to the new home on the /WD disk.

And things continued to run for a few more days… until both disks were filling up again…

root@RaPiM2:/home/pi# df
Filesystem     1K-blocks     Used Available Use% Mounted on
rootfs          59805812 56704004     40768 100% /
/dev/root       59805812 56704004     40768 100% /
[...]
/dev/sdb2       50264772 46496840   1207932  98% /WD

At this point I have all three transfers paused with Ctrl-z.

Over on the SD card:

root@RaPiM2:/MIRRORS/ftp.ncdc.noaa.gov/pub/data# ls noaa
1901  1912  1923  1934	1945  1956  1967  1978	1989		  dsi3260.pdf		   ish-format-document.pdf  ishJava_ReadMe.pdf
1902  1913  1924  1935	1946  1957  1968  1979	1990		  isd-history.csv	   ish-history.csv	    ish-qc.pdf
1903  1914  1925  1936	1947  1958  1969  1980	1991		  isd-history.txt	   ish-history.txt	    ish-tech-report.pdf
1904  1915  1926  1937	1948  1959  1970  1981	1992		  isd-inventory.csv	   ish-inventory.csv	    NOTICE-ISD-MERGE-ISSUE.TXT
1905  1916  1927  1938	1949  1960  1971  1982	1993		  isd-inventory.csv.z	   ish-inventory.csv.z	    readme.txt
1906  1917  1928  1939	1950  1961  1972  1983	1994		  isd-inventory.txt	   ish-inventory.txt	    station-chart.jpg
1907  1918  1929  1940	1951  1962  1973  1984	1995		  isd-inventory.txt.z	   ish-inventory.txt.z	    updates.txt
1908  1919  1930  1941	1952  1963  1974  1985	1996		  isd-problems.docx	   ishJava.class
1909  1920  1931  1942	1953  1964  1975  1986	1997		  isd-problems.pdf	   ishJava.java
1910  1921  1932  1943	1954  1965  1976  1987	1998		  ish-abbreviated.txt	   ishJava.old.class
1911  1922  1933  1944	1955  1966  1977  1988	country-list.txt  ish-format-document.doc  ishJava.old.java
root@RaPiM2:/MIRRORS/ftp.ncdc.noaa.gov/pub/data# du -ks noaa/
30484864	noaa/

So I’m all the way up to near the end of the last century at year 1998, only 17 more years of data to go… and it’s already at over 30 GB…

The other transfer was in GHCN at the same site.

21861060	ghcn

Almost 22 GB there and rising… The “biggy” there being “all”

18444948	all
36	COOPDaily_announcement_042011.doc
124	COOPDaily_announcement_042011.pdf
68	COOPDaily_announcement_042011.rtf
2822796	ghcnd_all.tar.gz
4	ghcnd-countries.txt
139556	ghcnd_gsn.tar.gz
281244	ghcnd_hcn.tar.gz
25676	ghcnd-inventory.txt
4	ghcnd-states.txt
8236	ghcnd-stations.txt
4	ghcnd-version.txt
24	readme.txt
32	status.txt

which is “almost done”… but not quite….

Now the first thing to realize is that had I done any ONE of these at a time, the total data transfer would most likely have fit on the SD card, or if not, on the external disk. It would have taken no more total time (as it is bandwidth limited on my internet pipe) and I’d have had far more ‘feedback’ along the way about how things were going.

But, since there is no sizing information on ‘how big’ on the web sites, I chose to just hope it would all fit and launched all three to “complete at night while I’m in bed”, which was about 4 or 5 days ago…

Lesson One: Do not ever guess how big a transfer / mirror will be from a government site. They are paid to create volume… (Whenever I get this all done, I’m going to put up a set of “how big are they” numbers for the sub-directories…)

But Wait! There’s MORE!!

So I’m thinking “No Problem, I’ve done this once, I can use the same fix again”.

Lesson Two: Beware of Hubris… especially if “I’ve done this before” crosses your mind.

Pawing around on some disks, I see that I have a duplicate copy of one of the disks on the newly bought 2 TB disk. The original is a 1 TB disk, but only about 3/4 of a TB is used. Surely a couple of hundred GB will be enough for ONE of these data sets, perhaps two or all three… but the disk is formatted NTFS. Linux can read and write that, but it isn’t very efficient, and I’m not all that keen on swapping file system types under a paused process… So, bright idea time: I’ll just shrink that NTFS partition and make an EXT one in the free space.

I boot up the CentOS box on a ‘rescue CD’ that has a nice little disk utility on it that I know works well doing exactly this, as I’ve used it many times before (important… not the time to find out which version doesn’t resize NTFS quite right…) and I’m ready to go. I launch “gparted”. Nice graphical user interface and all.

Gparted examines the disk, finds the partitions, tells me how much is free. I tell it to take that big fat NTFS partition and shrink it down so that only about 7 GB of free space is left inside it. I want to leave a little bit of working space in case I need to later move, uncompress, or whatever some small files and not worry about it too much. No Problem, says Gparted. I get the layout set up as I want it (free space ahead, after, etc.) and click the “do it” button (actually a giant green Check Mark on the set of graphical commands). It pops up the “really?” and I say yes. It pops up the “Doing it NOW!” dialog box with the helpful note “Depending on the number and type of operations, this might take a long time”.

That was yesterday. Now I’m looking at this thinking “No Shit Sherlock!”. And again we have no indication of percent done or how big ‘long’ might be.

Lesson Three: A TByte is big. Really Really Big. It is not made smaller by being cheaper now. Moving it around takes a very very long time. Especially over slow WAN links, on slow SD Cards, and as a ‘tower of Hanoi’ defragment / relocate / resize (which is what I’m guessing Gparted did) on a slow NTFS file system on a slow USB-2 spigot.

So now I’ve got the main monitor tied up on the R.PiM2 with three paused processes, the WD disk tied up until I can move data (and it has my home dir on it for the EVO… which needs the same screen as is used by the R.Pi anyway – so doubly out of action), and the ASUS is busy fondling its disk. Oh, and the ChromeBox can’t be used as it needs one of the two monitors that are both in use and locked to a process.

All of which leaves me with the Tablet as my only “do whatever” machine. Which works well for reading, not so well for things like making postings. (It is also pretty good at downloading, though it is a bit of an annoyance to pull the mini-SD card out of it to move bulk data… which would need one of the busy machines anyway…)

So I’m making this posting from the Raspberry Pi Model2, despite it being ‘cluttered’ at the moment with a bunch of windows with paused processes and short on ‘disk’ space and… Sigh.

Future Light

I’m hopeful that sometime today the partition resizing will complete. I’m also thinking I need to get a USB-3 speed box if I’m going to play with TB USB disks a lot. USB-2 is just way slow at that size. I’m also thinking that just tossing another $50 at a dedicated TB disk and formatting it to EXT would have been faster and smarter… But I’m now stuck in the muck and “woulda coulda shoulda” is not as important as “ought to do now”.

So for now, postings will be slower and a bit more limited as my data archives are kind of spread around and “in play”, the various boxes used for different things are ‘locked up’ or ‘locked out’, and I’m once again “Waiting For I/O, or someone like him…” to complete.

Once the disk diddle is done, I can get on with formatting it EXT, put it on the R.PiM2 (which is known to take a ‘hot plug via the USB powered hub’ OK – a direct hot plug will crash a Pi via the power sag), move about 30 GB to 50 GB of data, make the symbolic links, foreground a couple of processes, and once again saturate my internet pipe… in the hope that it will complete in a day or two.
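
In command terms that plan is something like this sketch (device name, label, and mount point are illustrative), followed by the same mv / ln -s / fg dance as before:

mkfs.ext4 -L MIRROR2 /dev/sda1
mkdir -p /MIRROR2
mount LABEL=MIRROR2 /MIRROR2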

That is the dim light I’m seeing at the end of this Big Data Small Pipe tunnel…

“But Hope is not a strategy. -E.M.Smith”, so there may be more ‘amusement’ to come…

22 Responses to When Big Disks Meets Slow Process…

  1. Kevin Lohse says:

    Had to read the title twice. As one who has to suffer the slings and arrows of outraged progeny whenever I attempt anything exotic on my computer, I take my hat off to a brave, brave man.

  2. LG says:

    Chief IO,
    Please consider explicitly spelling out the commands you entered for duplicating the following :

    This is done with Ctrl-z to suspend and “fg” to bring it back to the foreground and start it running again.

    Thanks.

  3. p.g.sharrow says:

    @EMSmith; serious chuckling here……;-) I thought that that was the idea behind the RasPi: set it to ONE job and walk away, shift monitor and keyboard etc. and your attention to another computer, to another task. I guess that this is just another learning experience for everyone… :) pg

  4. Larry Ledwick says:

    You can play with unix/linux job control commands with a simple loop like the following:

    $ testserver /mxhome/myuser> while true
    > do
    > date
    > sleep 30
    > done
    Fri Sep  4 13:49:15 MDT 2015
    Fri Sep  4 13:49:45 MDT 2015
    ^Z[1] + Stopped                  while true;do;date;sleep 30;done
    $ testserver /mxhome/myuser> fg
    while true;do;date;sleep 30;done
    Fri Sep  4 13:50:15 MDT 2015
    Fri Sep  4 13:50:45 MDT 2015
    ^C$ testserver /mxhome/myuser>
    
  5. Larry Ledwick says:

    http://linuxreviews.org/beginner/jobs/
    search for jobs, job control, fg, bg and there are a boat load of pages on it. I rarely use it but it is nice when you forget to background a job with & at the end of the command string. Just Ctrl-Z then bg and you have it running in the background and your session window is usable again.

  6. Larry Ledwick says:

    On the issue of big drives, I do backups for my photography to usb 2.0 docking drives with 2TB drives with an i7 64 bit windows box, it takes 24 hours to write a complete disk image from one 2TB drive to a new 2TB drive on my system. (commodity Western Digital green drives)

  7. E.M.Smith says:

    @LG:

    Um, I thought I did…

    To kill a running command type: CONTROL and c at the same time. Typically written as Ctrl-c
    To pause a running command, type CONTROL and z at the same time. Written Ctrl-z
    Once paused, to start the command running again, type fg and it resumes in your active window. Type bg and it resumes in the background leaving your active terminal window free for other use.
    To see a running background process, type ps
    Then to kill a background process, take the process number (pid) from that ps command and put it in a kill command. If for example the pid was 6856 you would type
    kill 6856
    or if that doesn’t kill it, type
    kill -9 6856
    The -9 means the program or command can not choose to ignore you…

    Does that help?

    Linux and UNIX commands are often only 2 or 3 characters long… vowels frequently omitted. (don’t blame me, I just use it :-)
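
    Put together as a quick terminal sketch (the sleep job is just something throwaway to practice on; 6856 stands in for whatever pid ps shows you):

    sleep 600 &        # start a throwaway background job to practice on
    ps                 # find its pid in the listing, say 6856
    kill 6856          # ask it nicely to quit
    kill -9 6856       # the “can not ignore you” version, if it ignores the first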

  8. E.M.Smith says:

    @PG:

    I have the RPiB+ running headless, and will likely start running the RPiM2 headless now…. but since the monitor was just sitting there and it was “unlikely to take more than overnight” or so I thought…. and the Antek Asus box has a loud fan while the RPi is silent in the dead of night… besides “it was never a problem before”…. mumble grumble spft…

    @Larry:

    Good stuff. Thanks for the links. I’ve lived Linux from the start of it and UNIX from about 1982, so 33 years now, and sometimes forget newbie POV links…

    Well I’m coming up on 24 hours of resizing, but some web searching said NTFS resize is particularly slow due to constant journalling adjustments and block tracking on each block read / write… or some such… so maybe by dinner tomorrow 8”}>

    @Kevin:

    Glad someone noticed the subliminal easter egg :-)

  9. LG says:

    Thanks Chief IO.
    Presumably, the initial jobs were issued with terminating & .

  10. E.M.Smith says:

    @LG: The initial jobs were just running in live terminal windows (no &) which is why I could use Ctrl-z to pause them. If launched with & you can kill the PID, or “renice” it to lower priority, but if there is a way to pause it, I don’t know it. Likely just from never having needed to do it.

    At this point, the resize of the disk is ongoing and I’m “one finger typing” on the tablet… doing nothing can be terribly hard :-]

  11. Larry Ledwick says:

    Looks like there is a signal that will pause and restart a process.
    On CentOS (ksh and bash), kill -l (that is a lower case L) will list the signals that can be used.
    The bash output looks like this:

    $ moria /mxhome/lledwick> kill -l
     1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
     6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
    11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
    16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
    21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
    26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
    31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
    38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
    43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
    48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
    53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
    58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
    63) SIGRTMAX-1  64) SIGRTMAX
    

    ksh output is just one long list.

    Per http://www.linux.org/threads/kill-commands-and-signals.4423/

    SIGCONT – To make processes continue executing after being paused by the SIGTSTP or SIGSTOP signal, send the SIGCONT signal to the paused process. This is the CONTinue SIGnal. This signal is beneficial to Unix job control (executing background tasks).

    SIGSTOP – This signal makes the operating system pause a process’s execution. The process cannot ignore the signal.
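
    So pausing and resuming an already running job would look something like this (the sleep job is just an illustrative stand-in):

    sleep 600 &           # a throwaway background job
    kill -SIGSTOP %1      # freeze it; this one cannot be ignored
    jobs                  # now shows it as Stopped
    kill -SIGCONT %1      # let it carry on in the background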

    I have sometimes found that kill signals don’t do exactly what you expect so best test these in a benign case before using them on important processes — YMMV depending on what shell you are in etc.

  12. E.M.Smith says:

    @Larry:

    Doesn’t surprise me… I’d remembered there being a few dozen SIGnals, and figured one of them might do it, but just ‘fessed up that I didn’t know which one and that due to my work habits had never needed to ‘go fish’ for it.

    Linux / Unix has pretty good Job control but I’ve always used it more interactively with the things that run ‘in the background’ usually done via cron after a lot of testing so not much need to do anything but kill them if they are rogue or zombie.

    My “habit” is to just open a terminal window and run a command / script / whatever in it. Since you can have a few dozen windows open, it makes for a very easy management method. Occasionally doing the whole Ctrl-z / fg thing if I think there is a risk of losing the terminal connection. The operator staff did more with job control, but mostly I just had to know how to hire good ones ;-)

    I did go through the canonical list of SIGs once, back about 1983? to familiarize myself with them. Then picked the ones I was likely to use and used them. Guess after 32 years the memory fades a little ;-) That’s what man pages are for.

    @All: The Manual is stored on line in pages called “man pages” – short for MANual. You can get them by typing “man {command name of interest}” so you can say “man ls” to get information about the ‘ls’ command. Of course, the very first command to learn after how to log on is “man man”… On some Linux releases some yahoo has tried to deprecate ‘man’ in favor of ‘info’. I think that’s just dumb. I’ve noticed that despite a half decade plus of trying to kill off ‘man pages’, the command ‘man’ still works ;-) The next command to learn is “apropos” as it finds things ‘appropriate’ in the man pages. So “apropos print” will list all the commands in the man pages that reference print.

    @All:

    The disk resize is still ongoing so we are well into a ‘couple of days’. I’m inclined to let it keep going mostly just so that the existing copy of the data is not lost. Then again, I already have a second copy… and a simple “nuke and reformat” would likely be quicker. Then again, again, I’m curious just how long this is going to take… Maybe I’ll run over to Best Buy and pick up a ‘small’ 1/2 TB disk and format it EXT and move on while this one plays with itself for the rest of the weekend…

  13. Larry Ledwick says:

    I am one of those operator folks and sometimes could really use a “stop that process now” command. We have a program that controls lots of our jobs according to limits we set in a table determining how many of each process can run at one time. Under Solaris (ksh) we could send a kill -POLL to that job and get it to go look at the table values, when we cut over to Centos, they changed default shells to bash and bash does not honor the kill -POLL signal and the kill -SIGIO which is supposed to behave like kill -POLL does not work (still have not figured out why).

    So for me it was really good to go find that kill -SIGSTOP option because that signal cannot be ignored. When things run away and the system is spawning new processes as fast as they blow up, after some bug bites us it will be really nice to be able to freeze everything before we have 2000 bombed jobs to clean up. We can pause that management job with a kill -USR1 and unpause it with a kill -USR2 but unfortunately it does not always catch the signal if it is busy with other stuff, and you have to send it multiple times to be sure the job gets paused.

    Job control can be entertaining when things are in a death spiral toward a crash. ;)

  14. E.M.Smith says:

    @Larry:

    Well, back a long long time ago… in college… when I was a student “pee-on”…

    The Operators of the Burroughs B6700 could only kill jobs one a time by typing:

    DS {job_number}

    DS being for DiScontinue…

    So when an operator was particularly obnoxious there was this “thing” that could be done. The B-6700 used all “free” disk space as “available for swap”. Good in theory. Unless you have a PO’ed college student on your hands… It also had a feature called “tasking” in their ALGOL where a program could spit out another program. Rather like “fork”ing in Unix land or more like “spawning”… At any rate, you could write a program called, say “A” and in it put:

    BEGIN:
    WHILE TRUE
    DO
    TASK B;
    OD

    and in a program named B have

    BEGIN
    WHILE TRUE
    DO
    TASK A;
    OD

    And then if you simply launched one of them, either one, from, say, an account not connected with your name… Well…

    There was an exponential explosion of processes, which would gradually start filling up all available swap, which was all free disk, and by the time an operator noticed the first of them, there were too many to “DS” no matter how fast you were. In about 2 minutes the machine was visibly slow. In about 5, it halted. In about 10 it would be booted up again….

    A quicker to type version was just in A to put:

    BEGIN WHILE TRUE DO TASK A; OD

    then type “TASK A” at the prompt and walk away…

    Somehow all the nice operators never had the machine melt down under them and require a reboot and restart of all the jobs and ….

    On behalf of all cranky students being shafted everywhere by “authority”, I deeply apologize… and I’ve been very kind to all operations staff (mine and otherwise) ever since…

  15. p.g.sharrow says:

    @EMSmith; I once knew a geeky young man 30 years ago. He was concerned about the future machinations of government officials. I told him the programming of computers was a good place to be as in this new field the Geeks would have the upper hand. They would always be a generation ahead of the abilities of bureaucrats. The keyboard would be more powerful than governance. Not sure what became of him. ..pg

  16. kneel63 says:

    For job control, you can also use “jobs”, which shows backgrounded processes started from your shell instance. bash allows a command substitution for these too – %1 is job 1. So re-starting a stopped job in the background can be as simple as “%1 &”.
    As in:
    ./do_something

    ./do_another &
    ./do_third &
    jobs
    [1] ./do_something [stopped]
    [2] ./do_another & [running]
    [3] ./do_third & [running]

    %3

    1 and 3 now stopped, 2 still running
    %1 &
    1 and 2 now running, 3 stopped

    Note that (obviously?) processes that aren’t attached to the terminal will stall on terminal IO.
    Also note that unless the process handles HUP signals, killing the controlling terminal will kill the process – so logging out of a shell kills all your & and bg jobs unless they are specifically written not to die.
    You can bypass this behaviour by running “screen”, which attaches (and detaches) virtual terminals that survive parent death.

  17. kneel63 says:

    Aarg! I had typed ctrl-z’s in there, but stoopidly used angle brackets…

  18. kneel63 says:

    P.S.
    screen is very useful when you want to run processes in the background that will run for a long time and you want to use a “verbose” option to keep an eye on where they’re up to. Just start a screen process, run your command in the foreground, detach the real terminal from the virtual one and carry on. When you need a peek at where it’s up to, reattach and look, then detach and carry on. Although that sounds much more complicated than another terminal on your desktop, screen will survive your logout and you can even reattach as a different user as well as from another login session.
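
    A bare-bones version of that workflow (the session name and the command being run are just illustrative):

    screen -S mirror      # start a named virtual terminal
    wget -m -np ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/    # run the long job in the foreground
    # detach with Ctrl-a then d; the job keeps running without the real terminal
    screen -ls            # list the sessions you can resume
    screen -r mirror      # reattach later and take a peek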

  19. E.M.Smith says:

    @kneel63:

    Useful stuff. I may play with it… While I rarely have been bitten by the ‘terminal crash, jobs lost’ problem, it is a risk that sometimes (like now) I’d like to be able to mitigate.

    @All:

    Well, I decided to ‘bundle up’ one of the data sets and shove it off as a tarball onto an NTFS disk, then did an fg to foreground the other one (it having 30 GB of free space now). It then hung on the second file. I suspect due to either some kind of timeout effect, or more likely the ‘daily’ files had changed in a way it was unprepared to deal with… at any rate, that gave me the ‘opportunity’ to restart it with a slightly different command. First off, no speed limits. (Need to make up some lost time, STEP ON IT! ;-) Also, not using ‘time stamps’ (the implied -N in the -m option).

    So to just run through the jungle and NOT change any old files you have (even if a newer version arrived… at this point I want ONE set and not spend time updating the mirror with the daily changes) I’m using a command like:

    wget -nc -np -r -l inf ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/

    So instead of -m (which implies -l inf -N -r) I’m explicitly calling out -l inf -r but not the -N, so no time stamps are being looked at. -np still says don’t go up to parent links and out to the whole site, while -nc says “no clobber”, i.e. if you already have a copy of a file, just skip it and go on to the next one.

    It is now flying along and hopefully that one will finish “soon” (been running another 12 hours now) and I can then let the second one rip the same way (after moving this one as a tarball out of the way).

    So once again “things are running”. Even if the disk resize on the TB disk is STILL dragging along. I’m going to give it to about Tuesday and then just give up, reformat, and reload the data from another source. And “note to self”: It is Far Far Far Faster to tarball the 750 GB off somewhere else, reformat, and restore than to “resize” a TB disk with Gparted. We’re talking a few hours to tarball / format vs a few days (so far!) to resize…
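
    That “tarball / reformat / restore” approach is roughly this (paths and device name are illustrative):

    tar -cf /SOMEWHERE_ELSE/bigdisk.tar -C /mnt/bigdisk .    # bundle the data off elsewhere
    umount /mnt/bigdisk
    mkfs.ext4 /dev/sdc1                                      # reformat the partition as EXT
    mount /dev/sdc1 /mnt/bigdisk
    tar -xf /SOMEWHERE_ELSE/bigdisk.tar -C /mnt/bigdisk      # and restore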

  20. Larry Ledwick says:

    I agree on screen we use it at work for backup sessions, so anyone can look in on backups started by another user. It is a great solution for any long running process especially if you will need to have several different users interact with the process (like change tapes on backups).

    You can create sessions with customized names that everyone understands, and screen -ls will list all the available screen sessions you can jump into.
    The only problem with screen is breaking the conditioned reflex to use Ctrl-C in a window, as that will kill the job running inside the screen session; you need to get out of the session gracefully (detach with Ctrl-a then d) to keep the process and session alive.

  21. E.M.Smith says:

    Well, here we are only 3 DAYS later and the disk partition resize operation has finally completed.

    Now partly this is because the Antek / ASUS box is particularly lousy in how it handles USB drives. Things that are done in one period of time on other machines seem to take 2 or 3 times as long on it, and it complains about things plugged into the slower type of USB port and says I could speed them up by plugging into the faster ports (and even lists them on the printout) but NO port is any faster. So partly this might just be tickling some particular ASUS motherboard USB bug. OTOH, the USB works, so…

    Also, this isn’t the world’s fastest USB disk. It’s a several years old Toshiba that’s USB 2 only. I don’t remember exactly how many years old, but I’d guess about 8? Once it’s all done I’ll see if there is a date on it. (Right now I’ve sent it off to make the EXT file system on the 140 GB space).

    And there are web postings of other folks “giving up” after more than 24 hours of Gparted doing a resize on even smaller NTFS partitions, so I think there is a more general problem. Looking inside Gparted, it just calls the program “ntfsresize”, so that is where the basic issue resides, not in Gparted per se. Maybe “someday” I’ll bother figuring out what about ntfsresize is so horridly slow on large NTFS disks. For now, I’m just glad it is over and that I can ‘move on’….
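
    For what it is worth, the command line path to the same shrink goes through that same ntfsresize, something like this sketch (device name and target size are illustrative):

    ntfsresize --info /dev/sdc1                     # report how far the NTFS data will let you shrink
    ntfsresize --no-action --size 760G /dev/sdc1    # dry run of the shrink, no changes made
    ntfsresize --size 760G /dev/sdc1                # the real (and evidently very slow) shrink
    # then shrink the partition itself with fdisk / parted and make the new EXT partition:
    mkfs.ext4 /dev/sdc2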

    For the “Lessons Learned” department: If desiring to ‘resize’ a large NTFS partition on a USB disk, either use the Microsoft tools (which seem to be faster) or, better yet, dump the data to a backup, reformat to the desired partitions, and then restore the data. It will be faster.

    I’ve also learned a ‘new trick’ as of last night, but that will be for another comment after this process is worked through. (The ‘just copy GHCN’ is now down to only 2 GB free space so in the next couple of hours, I need to get this disk mounted, stuff moved, and more space available to the process. Yes, once again I’m in a race condition with this ‘duplicate the data’ wget…)

  22. Pingback: Using Loop and Sparse File for EXT on NTFS File System | Musings from the Chiefio
