Content of this page
This page contains three tables.
- The first table lists the disk-packs that have been bought by stations and are still active. Stations should keep this table updated; when they withdraw disk-packs from circulation, they should remove them from the table.
- The second table contains the disk space bought per year and per station to populate disk-packs.
- The third table contains the disk space bought per year and per station to populate Flexbuff units.
CBD decisions
- At the beginning of disk operations the EVN directors agreed that each of the “busy” EVN stations should provide at least 150 TB to the EVN disk pool. The other stations should contribute about twice the amount of disk space recorded at their telescope per session.
- In 2011 the EVN directors agreed that each station should buy 7000€ worth of disk modules per year.
- Additional disks should be bought at the end of 2014 to enable more observing. (See CBD meeting minutes.)
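For rough context, the 7000 € figure can be translated into disk space using the approximate per-TB prices noted in the Flexbuff cost notes further down this page. A small sketch; the prices are taken from those notes and are approximations, not official figures:

```python
# Rough sketch: TB of disk purchasable per year from the agreed
# 7000 EUR budget, at the approximate EUR/TB prices noted in the
# Flexbuff cost notes on this page (assumed values, not official).
PRICE_EUR_PER_TB = {2015: 44, 2017: 50, 2018: 28}

def tb_for_budget(budget_eur=7000):
    # Whole TB bought per year at each year's approximate price
    return {year: budget_eur // price
            for year, price in PRICE_EUR_PER_TB.items()}

print(tb_for_budget())
```

At the 2018 price of roughly 28 €/TB, the same budget buys well over twice the disk space it did in 2015, which is consistent with the growing Flexbuff purchases in the tables below.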
Inventory
Please update the table below
Station | Total TB | 1 TB | 1.3 TB | 1.4 TB | 1.6 TB | 2 TB | 2.4 TB | 3.2 TB | 4 TB | 6 TB | 8 TB | 12 TB | 16 TB | 24 TB | 32 TB | 48 TB | PACKS | Last Updated |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Effelsberg | 2200 | 25 | 0 | 21 | 9 | 11 | 0 | 26 | 37 | 12 | 8 | 5 | 47 | 23 | 4 | 226 | 2017-05-08 | |
Westerbork | 390 | 0 | 2 | 20 | 0 | 36 | 0 | 0 | 12 | 0 | 43 | 0 | 3 | 5 | 82 | 2017-02-14 | ||
Onsala | 412 | 0 | 21 | 0 | 0 | 30 | 0 | 0 | 21 | 0 | 0 | 5 | 5 | 82 | 2014-10-31 | |||
Medicina | 504 | 2 | 0 | 0 | 0 | 22 | 5 | 13 | 11 | 0 | 3 | 0 | 11 | 5 | 72 | 2016-09-14 | ||
Noto | 446 | 10 | 0 | 0 | 0 | 14 | 0 | 20 | 0 | 0 | 6 | 0 | 8 | 3 | 3 | 63 | 2015-11-12 | |
Jodrell | 1086 | 0 | 0 | 0 | 4 | 14 | 0 | 17 | 0 | 13 | 29 | 0 | 17 | 16 | 110 | 2017-04-24 | ||
Seshan | 110 | 0 | 2 | 0 | 2 | 10 | 0 | 0 | 13 | 0 | 4 | 0 | 0 | 31 | 2012-12-10 | |||
Urumqi | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2015-01-23 | |||
Hartrao | 384 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | 2 | 0 | 15 | 0 | 4 | 5 | 36 | 2018-08-31 | ||
Torun | 257 | 0 | 0 | 0 | 2 | 27 | 0 | 0 | 8 | 0 | 132 | 0 | 0 | 2 | 52 | 2016-09-14 | ||
Yebes | 325 | 0 | 0 | 0 | 3 | 2 | 0 | 0 | 15 | 0 | 12 | 0 | 4 | 3 | 38 | 2015-08-20 | ||
Arecibo | 20 | 0 | 0 | 0 | 0 | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | ? | |||
Metsähovi | 78 | 0 | 0 | 0 | 0 | 2 | 4 | 2 | 3 | 1 | 3 | 0 | 2 | 16 | 2014-10-13 | |||
Sardinia | 128 | 16 | 56 | 9 | 2015-10-29 | |||||||||||||
KVAZAR | 160 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 20 | 0 | 0 | 20 | 2015-11-09 | |||
JIVE | 206 | 5 | 0 | 10 | 0 | 53 | 0 | 0 | 0 | 0 | 10 | 0 | 0 | 78 | 2012-08-21 | |||
Total | ~6600 | |||||||||||||||||
EVN use | ~2400 |
2) 5 modules are 4-packs with 2-TB disks.
3) 4 modules are 4-packs with 2-TB disks. The 4 TB disks bought in Dec 2014 & Dec 2015 are being converted into a mixture of 16 TB and 32 TB packs (8 modules as of Jan 2017: 3 x 16 TB and 5 x 32 TB). JIVE correlator staff do the conversion when disk-packs are released. The total TB reported is what is currently available (some 4 TB disks have yet to be placed and 3 remain as spares).
Notes:
- Effelsberg uses about 600 TB for the GMVA. The RA contribution for the EVN is ?? TB.
- Effelsberg: Will remove all our small modules with less than 2 TB. (April 2010)
- Effelsberg: Will try to avoid sending modules smaller than 3.2 TB. (January 2011)
- 2 TB disks can only be handled with SDK 8.2 and SDK 8.3 with the patch applied.
- Modules bigger than 8 TB require SDK 9, which is now officially supported for field operations.
- Effelsberg upgraded 4x 8TB modules to 48 TB. (March 2017)
Disk purchases for disk-packs
In 2011 the directors agreed that each station should buy 7000 € worth of disk modules per year.
Year | Ef | Hh | Jb | KVAZAR | Mc | Mh | Nt | On | Sh+T6 | Sr | Tr | Ur | Wb | Ys | Total (TB) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2011 | 32 | 80 | 48 | 48 | 12 | 24 | 196 | ||||||||
2012 | 64 | 40 | 80 | 112 | 32 | 64 | 80 | 32 | 504 | ||||||
2013 | 80 | 64 | 80 | 48 | 32 | 64 | 80 | 16 | 40 | 96 | 32 | 632 | |||
2014 | 96 | 150 | 32 | 16 | 72 | 150 | 112 | 152 | 72 | 848 | |||||
2015 | 160 | 192 | 64 | 32 | 192 | 96 | 96 | 832 | |||||||
2016 | 288 | 96 | 64 | 288* | 736 | ||||||||||
2017 | 160 | 96 | 40 | 120* | 416 | ||||||||||
2018 | 0 | 240* | 128* | 368 | |||||||||||
Total (TB) | 272 | 344 | 550 | 160 | 272 | 60 | 232 | 502 | 128 | 144 | 96 | 960 | 310 | 4208 |
Flexbuff purchases (disk space)
Year | Ef (S+J) | Hh (S+J) | KVAZAR | Ir (S+J) | Jb (S+J) | Mc (S+J) | Mh (S+J) | Nt (S+J) | On (S+J) | Sr (S+J) | Wb (S+J) | Ys (S+J) | Total (1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2015 | 128 | 324 | 452 | |||||||||||||||||||||
2016 | 144 | (144) | 288 | (144) | 216 | 144 | 865 | |||||||||||||||||
2017 | 192 | 144 | 288 | 160 | 120 | 160 | 120 | 288 | 834 | |||||||||||||||
2018 | 80 | 288 | 288 | 200 | 103 | 360 | 360 | 360 | 300 | 360 | ||||||||||||||
2019 | 360 | 240 | 60 | |||||||||||||||||||||
Total (TB) | 464 | 454 | 80 | 576 | 768 | 244 | 196 | 244 | 684 | 720 | 202 | 828 | 5264 |
Notes:
- Units are TB. Figures give the original (raw) capacity of the units.
- Codes: S -> Flexbuff at the station; J -> Flexbuff at the JIVE correlator.
- (1) The usable Flexbuff capacity at JIVE is reduced because a RAID is used. The totals and subtotals take this into account, but the individual per-station, per-year numbers do not:
- 144 TB -> 101 TB
- 240 TB -> 168 TB
- 288 TB -> 202 TB
- 360 TB -> 252 TB
- Ef uses a Mark6 with exchangeable modules, so more modules can be used if needed. The latest purchase, in 2017, is 16 x 10 TB disks.
- The Hh Flexbuff uses the same RAID setup as JIVE.
- A standard 36-disk unit may use 4, 6 or 8 TB disks:
- 4 TB x 36 = 144 TB
- 6 TB x 36 = 216 TB
- 8 TB x 36 = 288 TB
- Approximate costs:
- 2015: 44 €/TB (from 4 TB NAS disks)
- 2016:
- 2017: 50 €/TB (from 8 TB NAS disks)
- 2018: (March) ~28€/TB (from 10TB Seagate Ironwolf, €281 inclusive VAT in the Netherlands)
- Replacing disks at Flexbuffs:
- Ys replaces 36 x 4 TB disks with 36 x 10 TB disks. 32 of the 4 TB disks are used to populate 4 Mark5B packs; the remaining 4 x 4 TB are recovered by Ys.
- Wb bought 36 (12 in Nov. 2017 and 24 in Mar. 2018) 10TB disks so that JIVE could upgrade FlexBuffs with larger disks.
- Hh replaces 36 x 4 TB disks with 36 x 10 TB disks. All 36 x 4 TB disks are recovered by Hh.
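The RAID reductions listed above all correspond to a usable/raw ratio of about 0.70 (e.g. 101/144 ≈ 0.70). A minimal sketch, assuming that constant factor, which also covers the standard 36-disk unit sizes:

```python
def usable_tb(raw_tb, raid_factor=0.70):
    """Approximate usable capacity of a JIVE Flexbuff, assuming the
    ~30% RAID overhead implied by the conversions listed above."""
    return round(raw_tb * raid_factor)

# Standard 36-disk units with 4, 6 or 8 TB disks, raw and usable
for disk_tb in (4, 6, 8):
    raw = 36 * disk_tb
    print(f"{disk_tb} TB x 36 = {raw} TB raw, ~{usable_tb(raw)} TB usable")
```

With this factor the four listed conversions (144 -> 101, 240 -> 168, 288 -> 202, 360 -> 252) are all reproduced to the TB; the exact figure for any other unit should be confirmed with JIVE.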
Disks which seem to work well
2 TB
- Hitachi Ultrastar A7K3000 HUA723020ALA640 2 TB Internal Hard Drive - Walter A.
- We bought 60 Western Digital 2 TB WD20EFRX-68EUZN0 disks for 83 € + VAT each. The disks are cheap and perform well. Only one nasty issue: the internal idle3 timer must be disabled on each one to avoid a delay at record start. Giuseppe (Medicina, Italy)
- We bought 32 disks: Western Digital 2 TB WD20EFRX for 80 € + VAT each. Following Giuseppe's advice we ran idle3ctl on all of them to disable the timer. (Yebes)
3 TB
I heard rumors from Haystack that disks from Hitachi and Seagate seem to work well. Durability is still untested. Hopefully we will have more details by the end of January 2014 (TOG meeting). - Walter A.
On 14.1.2014 we ordered HGST HUA723030ALA640 3 TB disks (Hitachi SATA 600, Ultrastar A7K3000) for 2 modules for test purposes. We have them working in RAIDs here without problems. Cost 165 € + VAT. - Walter A.
4 TB
We bought HGST 4 TB Hitachi Ultrastar 7K4000 SATA III (HUS724040ALA640 0F14688) disks. The 3 modules we made conditioned well. Cost 211 € + VAT. - Walter A. 28.4.14
In Nov 2014, ASTRON also bought (30 pieces) Hitachi Ultrastar 7K4000 SATA III disks for €200 (+21% VAT). Antonis Polatidis 9.12.14
In Mar 2015, Hart bought (40 pieces) Seagate 4 TB NAS series ST4000VN000 disks for ~167 € (+14% VAT) each. Jonathan Quick 1.4.15 - performance TBD
In April 2015, Ys bought 8 WD40EFRX NAS series disks for 177 € each. P de Vicente (28-4-2015). 16 additional 4 TB disks were bought in August 2015.
Medicina bought 16 WD40EFRX Red NAS disks in March 2015 at 123 € each. After the idle3 timer removal they perform well, so far.
Another 41 disks of the same type were bought in December 2015 at 138.95 € each.
6 TB
We bought HGST 6 TB helium-filled disks (needed for high altitude). They will be used in modules for the Mark 6. Price about 360 € + VAT (bought 128). The disks performed well at 5000 m altitude in a Mark 6. - Walter A. 1.10.2014 & 19.2.2015
8 TB
We bought HGST 8 TB helium-filled disks. They work well in Mark 6 modules. W.A. (April 2017)
In the latest batch we found that they write faster than they read. This should be no problem for playback at a software correlator, but for firmware/hardware correlators with a fixed clock this might be different.
10 TB
10 TB Seagate IronWolf disks (ST10000VN0004, 7200 rpm) work well in FlexBuffs (advice from JIVE - A. Polatidis, Nov 2017) (agreed, works well also at Jodrell - E. Varenius, May 2018)
ASTRO / GEO VSN assignments at EVN stations
The information below may be out of date. Please update!!
- HART-001 thru HART-020 are geo modules
- HART-021 thru HART-052 are astro modules
- HART+100 up to +499 will be geo modules
- HART+500 and above will be astro modules
- Medicina: label numbers above 1000 are geo modules, below are astro modules (i.e. MED-1001 is a geo module).
- Noto: "NOT" are geo modules (i.e. NOT-0001) - "NTO" are astro modules (i.e. NTO-0001)
- Onsala: Astro disks have a VSN number below 100, e.g. OSOD-001.
- Onsala: Geo disks have a VSN number above 100, e.g. OSOD-101.
- Yebes astrodisks have a VSN number below 100: OAN-00xx (PATA) and OAN+00xx (SATA, up to OAN+0025)
- Yebes geodesy disks have a VSN number above 100: OAN-01xx
- SHAO: Geo/Astro split unknown
- XAO (Ur): XAO#1xxx for Astro, XAO#0xxx for Geo and XAO#Txxx for local Testing (IDE: #=- , SATA: #=+)
- KVAZAR: Geo/Astro disks used for EVN have a VSN format: IAAE-xxx.
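The naming rules above can be captured in a small lookup helper. This is only an illustrative sketch: it covers just the stations whose conventions are stated on this page, the function name is invented, and the parsing of the serial number is an assumption about the label format:

```python
def vsn_type(vsn):
    """Classify an EVN module VSN as 'astro', 'geo', or 'test' using the
    station conventions listed above. Sketch only: covers a subset of
    stations; parsing of the numeric part is an assumed label format."""
    label = vsn.upper()
    if label.startswith("NOT"):      # Noto geo prefix (NOT-0001)
        return "geo"
    if label.startswith("NTO"):      # Noto astro prefix (NTO-0001)
        return "astro"
    if label.startswith("XAO"):      # Ur: XAO#1xxx astro, #0xxx geo, #Txxx test
        tail = label[4:]             # skip 'XAO' plus the -/+ separator
        if tail.startswith("T"):
            return "test"
        return "astro" if tail.startswith("1") else "geo"
    # Remaining stations distinguish by the numeric part of the VSN
    num = int(label.split("-")[-1].split("+")[-1])
    if label.startswith("HART"):
        if "+" in label:             # HART+100..499 geo, +500 and up astro
            return "geo" if num < 500 else "astro"
        return "geo" if num <= 20 else "astro"  # HART-001..020 geo
    if label.startswith(("OSOD", "OAN")):       # Onsala / Yebes: <100 astro
        return "astro" if num < 100 else "geo"
    if label.startswith("MED"):                 # Medicina: >1000 geo
        return "geo" if num > 1000 else "astro"
    return "unknown"
```

For example, `vsn_type("OSOD-101")` gives `"geo"` and `vsn_type("NTO-0001")` gives `"astro"`. Stations whose split is unknown (e.g. SHAO) are deliberately not handled.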