I would like to suggest some changes to the new installer. Can you show a loading indicator while the calamares package is being downloaded? At the moment the button seems not to work for a while, and some dialog pops up after clicking Install and then closes before I can read anything. Also, why pamac? As a Manjaro user I don't even use pamac. It is slow compared to yay, and the one thing I hate the most is that it defaults to No when confirming whether I want to install a package. I do remember some people telling me that grub-customizer is dangerous.

I think I've tried to install more than 10 times and I've had enough of this. I did try opening the desktop file with sudo, but the installer crashed after some time and logged me out. The .vdi files seem to be causing the problem. Error that occurred in the last attempt:

configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
none on /run/credentials/rvice type ramfs (ro,nosuid,nodev,noexec,relatime,mode=700)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=984) (run_cmd.py:106)
02:48:08 Cannot wipe the filesystem of device /dev/sda: wipefs: error: /dev/sda: probing initialization failed: Device or resource busy (run_cmd.py:121)
02:48:08 'Cannot wipe the filesystem of device /dev/sda: wipefs: error: /dev/sda: probing initialization failed: Device or resource busy' (process.py:185)
02:48:08 Traceback (most recent call last): (process.py:190)
02:48:08 File "/usr/share/cnchi/src/misc/run_cmd.py", line 98, in call
output = subprocess.check_output( (process.py:190)
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, (process.py:190)
raise CalledProcessError(retcode, process.args, (process.py:190)
02:48:08 subprocess.CalledProcessError: Command '' returned non-zero exit status 1. (process.py:190)
During handling of the above exception, another exception occurred: (process.py:190)
02:48:08 File "/usr/share/cnchi/src/installation/process.py", line 155, in run (process.py:190)
02:48:08 File "/usr/share/cnchi/src/pages/automatic.py", line 254, in run_format
02:48:08 File "/usr/share/cnchi/src/installation/auto_partition.py", line 748, in run
02:48:08 File "/usr/share/cnchi/src/installation/auto_partition.py", line 520, in run_mbr
02:48:08 File "/usr/share/cnchi/src/installation/wrapper.py", line 49, in wipefs
call(cmd, msg=err_msg, fatal=fatal) (process.py:190)
02:48:08 : 'Cannot wipe the filesystem of device /dev/sda: wipefs: error: /dev/sda: probing initialization failed: Device or resource busy' (process.py:190)
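For context, here is a minimal sketch of the failure pattern in that log; it is not Cnchi's actual wrapper code, and the device path is a placeholder. It just shows how a wipefs call driven through Python's subprocess module raises the CalledProcessError seen above when the kernel still considers the device busy:

```python
# Minimal illustration (assumed, not Cnchi's real wrapper): run wipefs on a
# device and surface the "Device or resource busy" style failure seen above.
import subprocess


def wipe_device(device="/dev/sda"):  # device path is a placeholder
    cmd = ["wipefs", "-a", device]   # wipefs -a: erase all filesystem signatures
    try:
        # check_output raises CalledProcessError when wipefs exits non-zero,
        # e.g. "probing initialization failed: Device or resource busy".
        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        return output.decode()
    except subprocess.CalledProcessError as err:
        # Mirrors the installer log: the command fails because something
        # (a mount, swap, RAID/LVM member, or an open handle) still holds the device.
        print(f"Cannot wipe the filesystem of device {device}: "
              f"{err.output.decode().strip()}")
        raise


if __name__ == "__main__":
    wipe_device()
```

In other words, wipefs itself refuses to touch /dev/sda while it is in use; the installer only reports that refusal.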
I rebooted RockStor … and … I was able to wipe the drives.

Btrfs can often fail to 'drop' a drive from the 'busy' list even when the pool has been deleted. It's OK when removing a single drive from a pool via resize, but when dropping an entire pool a reboot is often required. There is kernel work ongoing to improve this type of drive management, and that should, in time, make operations like the one you attempted behave sensibly rather than throwing these resource-busy errors. All in good time, and thanks for reporting your findings.

And yes, raid1/10 within btrfs is far more mature, and faster, than the parity raids of 5/6. It's a little like two file systems in one, really, as the parity raids don't fully conform to the main remits of btrfs and so should probably not have been added. But it's often easier to fix something that exists than to start over, so we will have to see how things pan out for the parity raids. There is some work going on in that direction, although most of it seems to be on raid1/10, and there will soon be the btrfs raid1c2 and raid1c3 variants, which may be interesting. Plus we hope in time to support picking the data and metadata raid levels independently. This should help the feasibility of the parity raids a little, as one could then use, say, raid1c3 for metadata and raid6 for data. Again, all in good time; we are not quite there just yet. Our next move in this area would be to surface within the Web-UI what raid levels the data and metadata are actually stored in, and then we can use that to help prove out user-configurable data/metadata raid levels down the road.
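For anyone curious what independent data/metadata profiles look like at the btrfs level, here is a minimal sketch using plain btrfs-progs commands driven from Python; the device names and mountpoint are placeholders, and this is not something Rockstor's Web-UI exposes today. The raid1c3/raid1c4 profiles require kernel and btrfs-progs 5.5 or newer.

```python
# Sketch (assumed devices/mountpoint, not Rockstor's implementation): build or
# convert a btrfs pool with metadata on raid1c3 and data on raid6.
import subprocess

DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholders


def make_pool(devices=DEVICES):
    # mkfs.btrfs: -m sets the metadata profile, -d sets the data profile.
    subprocess.run(
        ["mkfs.btrfs", "-m", "raid1c3", "-d", "raid6", *devices],
        check=True,
    )


def convert_existing(mountpoint="/mnt/pool"):
    # Convert an already-mounted pool in place via a balance.
    subprocess.run(
        ["btrfs", "balance", "start",
         "-mconvert=raid1c3", "-dconvert=raid6", mountpoint],
        check=True,
    )


if __name__ == "__main__":
    make_pool()
```

The balance variant is handy because an existing pool can be converted in place rather than recreated, although a full balance rewrites every block and can take a long time on large pools.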