
ZOL 0.8.1 does not appear to honor recordsize=1M on zfs recv #9347

Closed

kneutron opened this issue Sep 22, 2019 · 10 comments
kneutron commented Sep 22, 2019

System information

Type                 | Version/Name
Linux                | Ubuntu 19.04
Distribution Name    | Ubuntu Disco
Distribution Version | 19.04
Linux Kernel         | 5.0.0-27-generic #28-Ubuntu SMP
Architecture         | x86_64
ZFS Version          | zfs-0.8.1-1 / zfs-kmod-0.8.1-1 (from 'zpool version')
SPL Version          | 0.8.1-1 (from 'modinfo spl | grep -iw version')

Describe the problem you're observing

Backing up a 6x2TB-disk mirror pool to a single USB3 drive (before converting the source to RAIDZ2). With a 1M recordsize I expected writes/sec on the destination to be roughly equivalent to MB/sec written, but ' zpool iostat ' sometimes shows them much higher (1,000+/sec).

(source)  pool: zseatera2
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 0 days 02:32:22 with 1 errors on Sun Sep 15 03:59:09 2019
config:
        NAME                                 STATE     READ WRITE CKSUM
        zseatera2                            ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            ata-ST2000VN000-1HJ164_A  ONLINE       0     0     0
            ata-ST2000VN000-1HJ164_B  ONLINE       0     0     0
          mirror-1                           ONLINE       0     0     0
            ata-ST2000VN000-1HJ164_C  ONLINE       0     0     0
            ata-ST2000VN000-1HJ164_D  ONLINE       0     0     0
          mirror-2                           ONLINE       0     0     0
            ata-ST2000VN000-1HJ164_E  ONLINE       0     0     0
            ata-ST2000VN004-2E4164_F  ONLINE       0     0     0
errors: Permanent errors have been detected in the following files:
        <0x1b2>:<0xa0c8>

(destination)  pool: zwd6t
 state: ONLINE
  scan: scrub repaired 0B in 0 days 02:57:07 with 0 errors on Sun Sep 22 05:58:47 2019
config:
        NAME                                                 STATE     READ WRITE CKSUM
        zwd6t                                                ONLINE       0     0     0
          usb-WD_Elements_25A3_57583231443339445-0:0  ONLINE       0     0     0
errors: No known data errors

Note - I had to manually copy and delete a few datasets before doing ' zfs send ' to the 6TB drive, due to " cannot receive: invalid stream (bad magic number) " errors; these were detected by adding '-v' to send/recv and were worked around. The source pool was created on Ubuntu 14.04 LTS and has not been upgraded yet:

zpool upgrade

This system supports ZFS pool feature flags.
All pools are formatted using feature flags.

Some supported features are not enabled on the following pools. Once a
feature is enabled the pool may become incompatible with software
that does not support the feature. See zpool-features(5) for details.

POOL FEATURE

zseatera2
multi_vdev_crash_dump
large_dnode
sha512
skein
edonr
userobj_accounting
encryption
project_quota
device_removal
obsolete_counts
zpool_checkpoint
spacemap_v2
allocation_classes
resilver_defer
bookmark_v2

Note - Destination pool was created under ZFS 0.8.1 and has all features enabled by default.

# zfs get recordsize
NAME                                                                        PROPERTY    VALUE    SOURCE
zseatera2                                                                   recordsize  128K     default
zseatera2@Sat                                                               recordsize  -        -
zseatera2/dv                                                                recordsize  128K     default
zseatera2/dv@Sat                                                            recordsize  -        -
zseatera2/from-p3300-vbox-bkp-20180426-virtbox-virtmachines                 recordsize  128K     default
zseatera2/from-p3300-vbox-bkp-20180426-virtbox-virtmachines@Sat             recordsize  -        -
zseatera2/notshrcompr                                                       recordsize  128K     default
zseatera2/notshrcompr@Sat                                                   recordsize  -        -
zseatera2/notshrcompr/bkp-virtualbox-vms--p2700quad1404                     recordsize  1M       local
zseatera2/notshrcompr/bkp-virtualbox-vms--p2700quad1404@Sat                 recordsize  -        -
zseatera2/vboxtest                                                          recordsize  128K     default
zseatera2/vboxtest@Sat                                                      recordsize  -        -

zwd6t                                                                       recordsize  128K     default
zwd6t@Sat                                                                   recordsize  -        -
zwd6t/dv-shrcompr                                                           recordsize  1M       local
zwd6t/dv-shrcompr/bkp-vmware-vms--p3300lts                                  recordsize  1M       local
zwd6t/from-p3300-vbox-bkp-20180426-virtbox-virtmachines                     recordsize  1M       local
zwd6t/from-p3300-zbkpteratmp-compr                                          recordsize  1M       local
zwd6t/from-zseatera2                                                        recordsize  1M       local
zwd6t/from-zseatera2@Sat                                                    recordsize  -        -
zwd6t/from-zseatera2/dv                                                     recordsize  1M       inherited from zwd6t/from-zseatera2
zwd6t/from-zseatera2/dv@Sat                                                 recordsize  -        -
zwd6t/from-zseatera2/from-p3300-vbox-bkp-20180426-virtbox-virtmachines      recordsize  1M       inherited from zwd6t/from-zseatera2
zwd6t/from-zseatera2/from-p3300-vbox-bkp-20180426-virtbox-virtmachines@Sat  recordsize  -        -
zwd6t/from-zseatera2/notshrcompr                                            recordsize  1M       inherited from zwd6t/from-zseatera2
zwd6t/from-zseatera2/vboxtest                                               recordsize  1M       inherited from zwd6t/from-zseatera2
zwd6t/from-zseatera2/vboxtest@Sat                                           recordsize  -        -
zwd6t/notshrcompr                                                           recordsize  1M       local
zwd6t/vboxtest                                                              recordsize  1M       local

Describe how to reproduce the problem

(date; time zfs send -LecvvR zseatera2@Sat |pv -t -r -b -W -i 2 -B 200M |zfs recv -svv -o recordsize=1024k zwd6t/from-zseatera2; date) 2>~/zfs-send-errs.txt

^ According to ' man zfs ', this should set ALL incoming datasets to a 1M recordsize regardless of what they used on the source.

( from ' zpool iostat -k 5 ' )

                                                       capacity     operations     bandwidth
pool                                                 alloc   free   read  write   read  write
---------------------------------------------------  -----  -----  -----  -----  -----  -----
zseatera2                                            1.74T  3.69T  1.67K      0   110M      0
  mirror                                              580G  1.25T    553      0  35.8M      0
    ata-ST2000VN000-1HJ164_A                      -      -    310      0  19.6M      0
    ata-ST2000VN000-1HJ164_B                      -      -    242      0  16.2M      0
  mirror                                              597G  1.23T    589      0  37.2M      0
    ata-ST2000VN000-1HJ164_C                      -      -    435      0  27.1M      0
    ata-ST2000VN000-1HJ164_D                      -      -    153      0  10.1M      0
  mirror                                              608G  1.22T    571      0  36.6M      0
    ata-ST2000VN000-1HJ164_E                      -      -    504      0  31.9M      0
    ata-ST2000VN004-2E4164_F                      -      -     67      0  4.71M      0
---------------------------------------------------  -----  -----  -----  -----  -----  -----
zwd6t                                                2.11T  3.34T      0  1.57K      0   128M
  usb-WD_Elements_25A3_57583231443339-0:0  2.11T  3.34T      0  1.57K      0   128M

The pool is still copying at the time of this post and the properties appear to be set, but the observed I/O seems to indicate that smaller records are being written by send/recv - whereas when the data was copied over manually, the desired recordsize and the associated I/O matched up.
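
As a rough sanity check on the numbers above (assuming one write op corresponds to roughly one record): 128M of write bandwidth at 1.57K writes/sec works out to about 128 MB / 1570 ≈ 80 KB per write, which is consistent with ~128K (or smaller, compressed) records; 1M records at the same bandwidth would be closer to ~128 writes/sec.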

Include any warning/errors/backtraces from the system logs

DeHackEd (Contributor) commented:

No. The recordsize of existing data is preserved in the send stream itself. That's why the option zfs send -L exists in the first place.
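
One way to see the record sizes actually carried in the stream (rather than inferring them from iostat) is to dump it with zstreamdump. A rough sketch, assuming the zstreamdump utility shipped with ZoL 0.8 and the same snapshot as above:

# Dump the stream's record headers instead of receiving it; the length reported for
# each WRITE record reflects the block size of the data as it exists on the sender.
$ zfs send -LecR zseatera2@Sat | zstreamdump -v | head -n 200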

should translate ALL incoming datasets to 1M recordsize regardless, according to ' man zfs '

Can you cite this? You likely misinterpreted it, and/or this is a documentation issue.

kneutron (Author) commented Sep 22, 2019

From ' man zfs ' ( for 0.8.1 )

zfs receive [-Fhnsuv] [-d|-e] [-o origin=snapshot] [-o property=value] [-x property] filesystem

If -o property=value or -x property is specified, it applies to the effective value of the property
throughout the entire subtree of replicated datasets. Effective property values will be set ( -o ) or inherited ( -x ) on the topmost in the replicated subtree. In descendant datasets, if the property is set by the send stream, it will be overridden by forcing the property to be inherited from the top‐most file system. Received properties are retained in spite of being overridden and may be restored with zfs inherit -S.
.
.(snip)
.
-o property=value

Sets the specified property as if the command zfs set property=value was invoked immediately before the receive. When receiving a stream from zfs send -R, causes the property to be inherited by all descendant datasets, as through zfs inherit property was run on any descendant datasets that have this property set on the sending system.

       Any editable property can be set at receive time. Set-once properties bound to the received data, such as normalization and casesensitivity, cannot be set at receive time even when the datasets are newly created by zfs receive.  Additionally both settable properties version and volsize cannot be set at receive time.

The -o option may be specified multiple times, for different properties. An error results if the same property is specified in multiple -o or -x options.
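
A minimal illustration of the two modes described above (hypothetical pool and stream names, with the stream redirected from a file):

# -o: force recordsize=1M as the effective value across the whole received subtree
$ zfs receive -o recordsize=1M pool/dst < replication.stream
# -x: override any recordsize set in the stream so datasets inherit it instead
#     (received values are retained but not effective)
$ zfs receive -x recordsize pool/dst < replication.stream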

thulle commented Sep 23, 2019

"The receiving system must have the large_blocks pool feature enabled as well."

I can't see that as a listed feature flag on your pool
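
The feature state can be checked explicitly on both pools (the ' zpool upgrade ' listing above only shows features that are not yet enabled):

$ zpool get feature@large_blocks zseatera2 zwd6t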

ahrens (Member) commented Sep 23, 2019

@kneutron Do I understand correctly that you are expecting zfs recv -o recordsize=1024k to result in all your files having a 1MB blocksize? As you probably know, changing the recordsize only affects newly created files. However, the recordsize property has no effect on received files. I can see how this would lead to confusion, since the recordsize property is usually taken into account when files are created. However, for this purpose files are not "created" by zfs receive.

kneutron closed this as completed Oct 5, 2019

kneutron (Author) commented Oct 5, 2019

OK to close

delner commented Jan 21, 2024

Do I understand correctly that you are expecting zfs recv -o recordsize=1024k to result in all your files having a 1MB blocksize? As you probably know, changing the recordsize only affects newly created files. However, the recordsize property has no effect on received files. I can see how this would lead to confusion, since the recordsize property is usually taken into account when files are created. However, for this purpose files are not "created" by zfs receive.

@ahrens Is this still true of ZFS 2.x+? That send/receive won't resize records?

I was looking at this OpenZFS documentation which suggested send/receive is a way to resize records:

ZFS datasets use a default internal recordsize of 128KB. The dataset recordsize is the basic unit of data used for internal copy-on-write on files. Partial record writes require that data be read from either ARC (cheap) or disk (expensive). recordsize can be set to any power of 2 from 512 bytes to 128 kilobytes. Software that writes in fixed record sizes (e.g. databases) will benefit from the use of a matching recordsize.

Changing the recordsize on a dataset will only take effect for new files. If you change the recordsize because your application should perform better with a different one, you will need to recreate its files. A cp followed by a mv on each file is sufficient. Alternatively, send / recv should recreate the files with the correct recordsize when a full receive is done.

In my case, I'm replicating a large dataset with recordsize 128K, whose files are probably better suited to 1M, to a new dataset whose recordsize is 1M, with zfs send -L <path/to/source>@<tag> | zfs receive -o recordsize=1M <path/to/dest>. Wondering if receive does or doesn't do this resize, because cp/mv is painfully slow. If you know a way to check the recordsize for a file in a dataset, that might also help me answer my own question.

Thanks!

amotin (Member) commented Jan 22, 2024

zdb is able to print block pointers, and therefore block sizes, for a specified file. I am not sure what that documentation quote means, but AFAIK send/receive can handle different block sizes only in a limited subset of cases. I would not call it a feature; instead, assume that the received copy is identical to the source.

delner commented Jan 22, 2024

Good to know. Is there an example command showing how to use zdb to print those block pointers? With that, I could experiment with some toy datasets and try to confirm the behavior on my version of ZFS. Thanks!

amotin (Member) commented Jan 22, 2024

zdb -vvvv dataset object. The object number matches the inode number of the file, which you can get from ls -i.
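
For example (hypothetical dataset, file name and object number):

$ ls -i /tank/data/file.bin      # inode number == ZFS object number
$ zdb -vvvv tank/data 1234       # look at the dblk column and the L0 block sizes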

delner commented Jan 22, 2024

I tried this on a VM, and I think the results show that for 2.1.5 it doesn't adjust recordsize for pre-existing files piped into receive. I got the same results with 2.2.2 compiled from source in the same environment, but zfs version still showed zfs-kmod-2.1.5-1ubuntu6~22.04.1, so I don't know if that polluted my test or not.

The key takeaway I observed with zdb -vvvv was that the migrated files had dblk=128K and the same block pointers in the 1M dataset as in the 128K dataset, whereas identical but newly generated versions of the same files (initialized solely within the 1M dataset) had dblk=1M and a shorter list of block pointers with larger increments between addresses. This suggested to me that after migration with receive, blocks are not re-split to match the new dataset's recordsize.

In actual practice on a 2.2.2 ZFS system, I also inspected one of my files that was created originally on a 128K-recordsize dataset and then mv'd to a 1M-recordsize dataset: like the newly generated files, it showed dblk=1M and a shorter list of block pointers with larger increments between addresses.

This corroborates @ahrens and @amotin 's assertions that receive will not (or at least will not consistently) adjust existing records... hence it looks like I'm stuck with a cp/mv. Good to at least clear up the confusion... perhaps the documentation should be updated accordingly.
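
For anyone else landing here: a minimal sketch of the rewrite-in-place approach from the docs quote above (copy each file and move the copy back, so the data is rewritten at the dataset's current recordsize). The path is hypothetical, and this ignores hard links, xattrs, and files in use:

# Rewrite every regular file so its blocks are re-allocated at the current recordsize.
$ find /path/to/dataset -type f -print0 | while IFS= read -r -d '' f; do
    cp -p -- "$f" "$f.tmp.recsize" && mv -- "$f.tmp.recsize" "$f"
  done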


Here's what I tried on my toy dataset.

Setting up the pools:

# Collect metadata
$ zfs version
zfs-2.1.5-1ubuntu6~22.04.2
zfs-kmod-2.1.5-1ubuntu6~22.04.1

# Create pools
$ zpool create small sdb
$ zpool create big sdc

Then setting up the source data with 128K and 1M files:

# Create dataset and test files 
$ zfs create -o recordsize=128K small/a
$ dd if=/dev/urandom of=/small/a/128k bs=128K count=64
64+0 records in
64+0 records out
8388608 bytes (8.4 MB, 8.0 MiB) copied, 0.0185861 s, 451 MB/s
$ dd if=/dev/urandom of=/small/a/1m bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (67 MB, 64 MiB) copied, 0.242175 s, 277 MB/s

Then inspecting the source objects in the 128K dataset:

# Get object IDs
$ ls -i /small/a/
2 128k  3 1m

# Print block pointers for objects
# 128K
$ zdb -vvvv small/a 2
Dataset small/a [ZPL], ID 162, cr_txg 41929, 72.1M, 8 objects, rootbp DVA[0]=<0:30080fc00:200> DVA[1]=<0:32000f800:200> [L0 DMU objset] fletcher4 lz4 unencrypted LE contiguous unique double size=1000L/200P birth=41944L/41944P fill=8 cksum=119baafd95:64691381380:12b6c54929e5f:26d8be3ff485a7

    Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
         2    2   128K   128K  8.01M     512     8M  100.00  ZFS plain file
                                               176   bonus  System attributes
	dnode flags: USED_BYTES USERUSED_ACCOUNTED USEROBJUSED_ACCOUNTED 
	dnode maxblkid: 63
	path	/128k
	uid     0
	gid     0
	atime	Mon Jan 22 15:45:35 2024
	mtime	Mon Jan 22 15:45:35 2024
	ctime	Mon Jan 22 15:45:35 2024
	crtime	Mon Jan 22 15:45:35 2024
	gen	41934
	mode	100644
	size	8388608
	parent	34
	links	1
	pflags	840800000004
Indirect blocks:
               0 L1  0:240013c00:e00 20000L/e00P F=64 B=41934/41934 cksum=17df6dcf401:298572fb2b41f:2ec7b7e6b835587:855485b7b3425996
               0  L0 0:2c4031200:20000 20000L/20000P F=1 B=41934/41934 cksum=400b704282a6:100350f8dea6a388:ee623428dda6e67d:5003c476d7f70b48
           20000  L0 0:2c4051200:20000 20000L/20000P F=1 B=41934/41934 cksum=4008b5a35a8c:100949c422fe9afe:707dfc314caef630:cd9a32279cfa401b
...
          7e0000  L0 0:2c4811200:20000 20000L/20000P F=1 B=41934/41934 cksum=3fc27d0b11dc:fecf30c0d97dd10:f6ffee6457b02ce2:759fdbc1c98600db

		segment [0000000000000000, 0000000000800000) size    8M

# 1M
$ zdb -vvvv small/a 3
Dataset small/a [ZPL], ID 162, cr_txg 41929, 72.1M, 8 objects, rootbp DVA[0]=<0:30080fc00:200> DVA[1]=<0:32000f800:200> [L0 DMU objset] fletcher4 lz4 unencrypted LE contiguous unique double size=1000L/200P birth=41944L/41944P fill=8 cksum=119baafd95:64691381380:12b6c54929e5f:26d8be3ff485a7

    Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
         3    2   128K   128K  64.0M     512    64M  100.00  ZFS plain file
                                               176   bonus  System attributes
	dnode flags: USED_BYTES USERUSED_ACCOUNTED USEROBJUSED_ACCOUNTED 
	dnode maxblkid: 511
	path	/1m
	uid     0
	gid     0
	atime	Mon Jan 22 15:45:50 2024
	mtime	Mon Jan 22 15:45:50 2024
	ctime	Mon Jan 22 15:45:50 2024
	crtime	Mon Jan 22 15:45:50 2024
	gen	41937
	mode	100644
	size	67108864
	parent	34
	links	1
	pflags	840800000004
Indirect blocks:
               0 L1  0:2c4839200:5400 20000L/5400P F=512 B=41938/41938 cksum=8c3d33b70dc:5b68ae5b03852b:7a4dfd2d509b0af4:20e61564f59051df
               0  L0 0:240015200:20000 20000L/20000P F=1 B=41937/41937 cksum=3fbf9e060c4f:ff48e103cbd6e6b:2f677165f31a327a:d531e232628e3c8f
           20000  L0 0:240035200:20000 20000L/20000P F=1 B=41937/41937 cksum=4017ccf5b11a:1016578bddc0e6cd:627677dd8cc2d07c:d3228741014e7823
...
         3fe0000  L0 0:243ff5e00:20000 20000L/20000P F=1 B=41938/41938 cksum=3fd3412d69d1:ff32266d31c720a:f20445c8e7a48b52:7c112e4de1c5b7cd

		segment [0000000000000000, 0000000004000000) size   64M

Then migrating the source data to the 1M dataset with send/receive, and inspecting the objects:

# Migrate data to destination pool
$ zfs snapshot small/a@a
$ zfs send -L small/a@a | zfs receive -o recordsize=1M big/a

# Inspect the destination pool
$ zfs get recordsize big/a
NAME   PROPERTY    VALUE    SOURCE
big/a  recordsize  1M       local

# Inspect migrated objects
$ ls -i /big/a/
2 128k  3 1m

# 128K
$ zdb -vvvv big/a 2
Dataset big/a [ZPL], ID 157, cr_txg 42093, 72.1M, 8 objects, rootbp DVA[0]=<0:244004c00:200> DVA[1]=<0:260004c00:200> [L0 DMU objset] fletcher4 lz4 unencrypted LE contiguous unique double size=1000L/200P birth=42102L/42102P fill=8 cksum=118cee71bc:6662aeafb4e:1380214832ed2:293dbfd7c06929

    Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
         2    2   128K   128K  8.01M     512     8M  100.00  ZFS plain file
                                               176   bonus  System attributes
	dnode flags: USED_BYTES USERUSED_ACCOUNTED USEROBJUSED_ACCOUNTED 
	dnode maxblkid: 63
	path	/128k
	uid     0
	gid     0
	atime	Mon Jan 22 15:45:35 2024
	mtime	Mon Jan 22 15:45:35 2024
	ctime	Mon Jan 22 15:45:35 2024
	crtime	Mon Jan 22 15:45:35 2024
	gen	41934
	mode	100644
	size	8388608
	parent	34
	links	1
	pflags	840800000004
Indirect blocks:
               0 L1  0:240000a00:e00 20000L/e00P F=64 B=42095/42095 cksum=17d47678a37:296659135a77a:2e96631fc2f3235:81df627e87194907
               0  L0 0:280002200:20000 20000L/20000P F=1 B=42095/42095 cksum=400b704282a6:100350f8dea6a388:ee623428dda6e67d:5003c476d7f70b48
           20000  L0 0:280022200:20000 20000L/20000P F=1 B=42095/42095 cksum=4008b5a35a8c:100949c422fe9afe:707dfc314caef630:cd9a32279cfa401b
...
          7e0000  L0 0:2807e2200:20000 20000L/20000P F=1 B=42095/42095 cksum=3fc27d0b11dc:fecf30c0d97dd10:f6ffee6457b02ce2:759fdbc1c98600db

		segment [0000000000000000, 0000000000800000) size    8M

# 1M
$ zdb -vvvv big/a 3
Dataset big/a [ZPL], ID 157, cr_txg 42093, 72.1M, 8 objects, rootbp DVA[0]=<0:244004c00:200> DVA[1]=<0:260004c00:200> [L0 DMU objset] fletcher4 lz4 unencrypted LE contiguous unique double size=1000L/200P birth=42102L/42102P fill=8 cksum=118cee71bc:6662aeafb4e:1380214832ed2:293dbfd7c06929

    Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
         3    2   128K   128K  64.0M     512    64M  100.00  ZFS plain file
                                               176   bonus  System attributes
	dnode flags: USED_BYTES USERUSED_ACCOUNTED USEROBJUSED_ACCOUNTED 
	dnode maxblkid: 511
	path	/1m
	uid     0
	gid     0
	atime	Mon Jan 22 15:45:50 2024
	mtime	Mon Jan 22 15:45:50 2024
	ctime	Mon Jan 22 15:45:50 2024
	crtime	Mon Jan 22 15:45:50 2024
	gen	41937
	mode	100644
	size	67108864
	parent	34
	links	1
	pflags	840800000004
Indirect blocks:
               0 L1  0:2c0008e00:5400 20000L/5400P F=512 B=42096/42096 cksum=8ae5db0c3f8:5b8f757206d70c:7f388eb9589f02c0:2eacf4f8242d8abd
               0  L0 0:240001a00:20000 20000L/20000P F=1 B=42095/42095 cksum=3fbf9e060c4f:ff48e103cbd6e6b:2f677165f31a327a:d531e232628e3c8f
           20000  L0 0:240021a00:20000 20000L/20000P F=1 B=42095/42095 cksum=4017ccf5b11a:1016578bddc0e6cd:627677dd8cc2d07c:d3228741014e7823
...
         3fe0000  L0 0:243fe3200:20000 20000L/20000P F=1 B=42096/42096 cksum=3fd3412d69d1:ff32266d31c720a:f20445c8e7a48b52:7c112e4de1c5b7cd

		segment [0000000000000000, 0000000004000000) size   64M

Then I generated some fresh objects directly to the 1M dataset to compare to:

# Create new, equivalent objects in destination pool
# to compare blocks against migrated objects.
$ dd if=/dev/urandom of=/big/a/128k-new bs=128K count=64
64+0 records in
64+0 records out
8388608 bytes (8.4 MB, 8.0 MiB) copied, 0.0532519 s, 158 MB/s
$ dd if=/dev/urandom of=/big/a/1m-new bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (67 MB, 64 MiB) copied, 0.195279 s, 344 MB/s

# Get object IDs
$ ls -i /big/a/
2 128k  4 128k-new  3 1m  5 1m-new

# Print block pointers for objects
# 128K
$ zdb -vvvv big/a 4
Dataset big/a [ZPL], ID 157, cr_txg 42093, 144M, 10 objects, rootbp DVA[0]=<0:360802800:200> DVA[1]=<0:400002800:200> [L0 DMU objset] fletcher4 lz4 unencrypted LE contiguous unique double size=1000L/200P birth=44142L/44142P fill=10 cksum=125681dbcc:69dbb97f736:13f6402b8a072:29e4daf6d7d99d

    Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
         4    2   128K     1M  8.00M     512     8M  100.00  ZFS plain file
                                               176   bonus  System attributes
	dnode flags: USED_BYTES USERUSED_ACCOUNTED USEROBJUSED_ACCOUNTED 
	dnode maxblkid: 7
	path	/128k-new
	uid     0
	gid     0
	atime	Mon Jan 22 18:51:35 2024
	mtime	Mon Jan 22 18:51:35 2024
	ctime	Mon Jan 22 18:51:35 2024
	crtime	Mon Jan 22 18:51:35 2024
	gen	44118
	mode	100644
	size	8388608
	parent	34
	links	1
	pflags	840800000004
Indirect blocks:
               0 L1  0:380000000:400 20000L/400P F=8 B=44118/44118 cksum=a4290a4330:514edb81a7dc:187bc8dde267c9:5a450aeac401020
               0  L0 0:360000000:100000 100000L/100000P F=1 B=44118/44118 cksum=1ffc447b1cc0e:ff83c15c03df0197:728796f1d66aa7bf:739d2ef8ade67ab9
          100000  L0 0:360100000:100000 100000L/100000P F=1 B=44118/44118 
...
          700000  L0 0:360700000:100000 100000L/100000P F=1 B=44118/44118 cksum=200a0413dc810:19aa2bdba5ce817:5995aefd091e8a3e:c75b7eddce0ceccb

		segment [0000000000000000, 0000000000800000) size    8M

# 1M
$ zdb -vvvv big/a 5
Dataset big/a [ZPL], ID 157, cr_txg 42093, 144M, 10 objects, rootbp DVA[0]=<0:360802800:200> DVA[1]=<0:400002800:200> [L0 DMU objset] fletcher4 lz4 unencrypted LE contiguous unique double size=1000L/200P birth=44142L/44142P fill=10 cksum=125681dbcc:69dbb97f736:13f6402b8a072:29e4daf6d7d99d

    Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type
         5    2   128K     1M  64.0M     512    64M  100.00  ZFS plain file
                                               176   bonus  System attributes
	dnode flags: USED_BYTES USERUSED_ACCOUNTED USEROBJUSED_ACCOUNTED 
	dnode maxblkid: 63
	path	/1m-new
	uid     0
	gid     0
	atime	Mon Jan 22 18:51:48 2024
	mtime	Mon Jan 22 18:51:48 2024
	ctime	Mon Jan 22 18:51:48 2024
	crtime	Mon Jan 22 18:51:48 2024
	gen	44121
	mode	100644
	size	67108864
	parent	34
	links	1
	pflags	840800000004
Indirect blocks:
               0 L1  0:384002600:e00 20000L/e00P F=64 B=44122/44122 cksum=198960c7da4:2ca53cf51a8b8:3264de4c50fbd4c:b67f2df0bc81b14b
               0  L0 0:380001000:100000 100000L/100000P F=1 B=44121/44121 cksum=20019621950e1:4a9a45612457a7:be5ceb8b80da5fd3:9c45db38a885ea5a
          100000  L0 0:380101000:100000 100000L/100000P F=1 B=44121/44121 cksum=1ff48ec8ce415:ff729cb9af46687b:e4f0d3dd5e861808:1bb92ee402767350
...
         3f00000  L0 0:383f02600:100000 100000L/100000P F=1 B=44122/44122 cksum=1ffacd220ead5:ff38e0124f4d2f65:335de553e41edac7:1588a031109594d8

		segment [0000000000000000, 0000000004000000) size   64M
