Correlation Status
Project Code | Block Code | Sources | DOY | UT | Freq (GHz) | Stations | Status | PI | Comment |
---|---|---|---|---|---|---|---|---|---|
f242a | | | 284 | | 86 | European | | | |
c242a | | | 284 | | 86, 43 | Global | | | |
c242b | | | 285 | | 86 | Global | | | |
c242c | | | 286 | | 86 | Global | | | |
c242d | | | 286 | | 86, 43 | Global | | | |
General comments
Stations
- Nn has pad N09
- Apex joining for the first time
- Ef is out
Observing Notes
- Fringe test successfully detected fringes between Nn, Pv, On, Ys, and Mh.
- Pv stowed at ~07:15 UTC until scan 576, stopping again at 20:20 UTC, back at scan 743, then stopped after an hour.
- Nn stowed due to wind from 16:20 UTC until 23:40 UTC (c242a)
- Apex out initially due to power failure, on source ~13:30 UTC
- Ys lost scans 209-222
- Oct 12: Nn stopped at 14:07 UTC
- Pv on sky since 14:30 UTC (scan 320).
- c242c/d Ys very foggy, cloudy
- Pv stopped due to strong wind at 19:45 UTC.
- Oct 13: Nn started with the 05:45 scan but had acquisition problems 07:30-08:30 UTC
Mounting the APEX GMVA Module
APEX data are on BHC%0141 in CD502. To hand-carry the GMVA data from APEX to Bonn, they were consolidated from two Mark6 modules (2 x 8 disks) onto a more readily transportable set of 8 loose disks.
Data of one polarization are in the standard per-disk subdirectory 'data'; data of the other polarization are in 'GMVA_slot2'.
To mount the "two modules" contained on BHC%0141, use:
```
# on d281, assuming that BHC%0141 is in slot 1:
fuseMk6 -r '/mnt/disks/1/*/data' /`hostname -s`_fuse/1
fuseMk6 -r '/mnt/disks/1/*/GMVA_slot2' /`hostname -s`_fuse/2
```
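To confirm that both fused "modules" actually came up, a minimal check (generic shell only; the mount points are the ones created above):

```
# list a few scans and the reported capacity of each fused "module"
host=`hostname -s`
for slot in 1 2; do
    echo "=== /${host}_fuse/${slot} ==="
    ls -lh "/${host}_fuse/${slot}" | head   # scans should appear as single files
    df -h "/${host}_fuse/${slot}"           # fuse mount should report the module capacity
done
```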
Yebes data layout
Info from Javi Gonzales Garcia: we configured our FiLa10G with the following parameters:
```
2024.284.15:54:25.34/form/wastro
2024.284.15:54:24.83#dbbcn#fila10g/VDIF Frame properties:
2024.284.15:54:24.83#dbbcn#fila10g/ channel width (in bits)          : 2
2024.284.15:54:24.83#dbbcn#fila10g/ number of channels per frame     : 4
2024.284.15:54:24.83#dbbcn#fila10g/ payload size (in bytes)          : 8000
2024.284.15:54:24.83#dbbcn#fila10g/ => frame size (in bytes)         : 8032
2024.284.15:54:24.83#dbbcn#fila10g/ => number of frames per second   : 128000 (64bit@128MHz)
2024.284.15:54:24.83#dbbcn#fila10g/ => number of data threads        : 8
2024.284.15:54:24.83#dbbcn#fila10g/ => number of frames per thread   : 16000 (8bit@128MHz)
```
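These numbers are internally consistent; a quick cross-check with plain shell arithmetic (only the quoted values come from the log above):

```
# 2-bit samples, 4 channels per frame, 8000-byte payload, 128 MHz sample rate
bits_per_frame=$((8000*8))                    # 64000 payload bits per frame
bits_per_sample_time=$((2*4))                 # 8 bits across the 4 channels
samples_per_frame=$((bits_per_frame / bits_per_sample_time))        # 8000 sample times
echo "frames/s per thread: $((128000000 / samples_per_frame))"      # -> 16000
echo "aggregate frames/s : $((8 * 128000000 / samples_per_frame))"  # 8 threads -> 128000
```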
Corner-turning was on, so each VDIF thread was recorded in a separate file (8 files in total). The mapping of VDIF thread channels to the channel IDs in the VEX file would be:
VDIF Thread channel # | DS0 | DS1 | DS2 | DS3 | DS4 | DS5 | DS6 | DS7 |
---|---|---|---|---|---|---|---|---|
1 | &CH01 | &CH05 | &CH09 | &CH13 | &CH17 | &CH21 | &CH25 | &CH29 |
2 | &CH02 | &CH06 | &CH10 | &CH14 | &CH18 | &CH22 | &CH26 | &CH30 |
3 | &CH03 | &CH07 | &CH11 | &CH15 | &CH19 | &CH23 | &CH27 | &CH31 |
4 | &CH04 | &CH08 | &CH12 | &CH16 | &CH20 | &CH24 | &CH28 | &CH32 |
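In other words, thread channel c of datastream DSd maps to &CHnn with nn = 4d + c. The table can be regenerated with a short loop (a sketch, not part of the original notes):

```
# regenerate the thread-channel -> VEX channel mapping: &CHnn with nn = 4*DS + ch
for ds in 0 1 2 3 4 5 6 7; do
    for ch in 1 2 3 4; do
        printf 'DS%d thread-channel %d -> &CH%02d\n' "$ds" "$ch" $((4*ds + ch))
    done
done
```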
APEX Disk Recovery - for future reference
During unrelated tests at MPIfR, the filesystem metadata on 1 of the 8 disks was unfortunately erased, i.e., part of the module was "erased". During a later trip to APEX, the missing 'GMVA_slot2' files of that disk were copied out from the module still existing there and integrated back into BHC%0141. The missing 'data' files of that disk were less trivial to recover; nevertheless, full recovery was successful, and module BHC%0141 again contains the full original data. For future reference, the steps were:
```
# Make a low-level backup of the wiped disk
root@mark6-08> cd /data/gmva2024_2/
root@mark6-08> dd bs=1M if=/dev/sdb of=apex-module-disk1-wiped.raw status=progress
root@mark6-08> chmod a-w apex-module-disk1-wiped.raw
root@mark6-08> fdisk -lu apex-module-disk1-wiped.raw
#     Start        End          Size  Type             Name
#  1  2048         15627857919  7.3T  Microsoft basic  MPIH%024_5
#  2  15627857920  15628052479   95M  Microsoft basic  MPIH%024_5m

# Grab the XFS file system structure from an intact disk
root@mark6-08> cd /data/gmva2024_2/
root@mark6-08> losetup --read-only -o $((512*2048)) /dev/loop1 /dev/sdc
root@mark6-08> xfs_metadump -g -f -o -w -a /dev/loop1 apex-module-disk2-intact.xfs_metadump
root@mark6-08> losetup -D ; losetup -a

# Transplant the XFS structure from the intact disk onto the wiped-disk raw content
root@fxmanager> cd /data/gmva2024_2/
root@fxmanager> dd bs=512 if=apex-module-disk1-wiped.raw \
                   of=recovery-attempt.fs skip=2048 count=$((15627857919-2048+1)) \
                   status=progress conv=notrunc
root@fxmanager> dd status=progress conv=notrunc bs=512 count=1024 \
                   seek=15627855872 if=/dev/zero of=recovery-attempt.fs  # appends a bit of 0x00 padding
root@fxmanager> losetup -v -o 0 /dev/loop0 recovery-attempt.fs
root@fxmanager> xfs_mdrestore -g apex-module-disk2-intact.xfs_metadump /dev/loop0
                2070 MB read
root@fxmanager> mkdir cloop ; mount /dev/loop0 ./cloop/ -txfs -oro  # success!

# Copy out data from the mounted loop device, i.e. from the fixed xfs partition:
oper@fxmanager> cd /data/gmva2024_2/ ; mkdir recovered_content
oper@fxmanager> cp -anv ./cloop/data/*.vdif ./recovered_content/
oper@fxmanager> mkdir recovered_GMVA_slot2
oper@fxmanager> cp -anv ./cloop/GMVA_slot2/*.vdif ./recovered_GMVA_slot2/

# Restore content: init the half-wiped partitions, restore Mk6 metadata
root@mark6-08> mkfs.xfs -f /dev/sdb1
root@mark6-08> mkfs.xfs -f /dev/sdb2
root@mark6-08> mount /dev/sdb2 /tmp ; cp -av /mnt/disks/.meta/1/2/* /tmp ; umount /tmp
#
# 1) Add GMVA_slot2 data from the new disk from the post-GMVA APEX visit
#    (could actually use ./recovered_GMVA_slot2/, too, but did not get to proceed
#    with the low-level recovery attempts until after the post-GMVA APEX visit :P)
# stop & start mk5daemon
oper@mark6-08> sudo mount /mnt/disks/1/2/ -oremount,rw
root@mark6-08> cp -anv /mnt/disks/3/1/GMVA_slot2_copy/* /mnt/disks/1/2/GMVA_slot2/
# 2) Also add the 'data' files from the restored image
oper@mark6-08> cp -anv /data/gmva2024_2/recovered_content/*.vdif /mnt/disks/1/2/data/
oper@mark6-08> sudo mount /mnt/disks/1/2/ -oremount,ro
```
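Afterwards, the restored module content can be spot-checked against the recovered copies bit-for-bit (a hypothetical check using the paths from above, not part of the original procedure):

```
# compare each recovered scan against its copy on the restored module
cd /data/gmva2024_2/recovered_content
for f in *.vdif; do
    cmp -s "$f" "/mnt/disks/1/2/data/$f" || echo "MISSING or MISMATCH: $f"
done
```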
Recording Media
See the media distribution plan.