
Steps:

1. Clone the HD (.vdi) file

1). Show VDI information about the current disk.

$ vboxmanage showhdinfo "D:\Ubuntu_clang.vdi"

2). Clone the old VDI file to a new VDI file. (Syntax: $ vboxmanage clonehd src dest)

$ vboxmanage clonehd D:\Ubuntu_clang.vdi D:\tmp\outputfile.vdi

Show VDI information about the cloned disk.

$ vboxmanage showhdinfo D:\tmp\outputfile.vdi

3). Resize the cloned disk.

$ vboxmanage modifyhd D:\tmp\outputfile.vdi --resize 51200

Show VDI information about the resized clone disk.

$ vboxmanage showhdinfo D:\tmp\outputfile.vdi

You can either delete the old “fixed” file, or leave it as a backup. Make sure you test the new VDI file before you delete the original one.
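If vboxmanage is not recognized on a Windows host, the VirtualBox install directory is probably not on PATH; a hedged workaround, assuming the default install location:

$ "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" showhdinfo "D:\Ubuntu_clang.vdi"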

Note:

  • You will not immediately see the new size of the cloned disk.
  • You have to boot the VM with it, and then use your partition management tool to expand the partition to fill the virtual disk (or create more partitions); see the sketch below for a command-line alternative in a Linux guest.
  • For a Windows guest, just run diskmgmt.msc and you'll be able to expand the partition there.
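For a Linux guest, here is a minimal command-line alternative to the GParted route in section 2 below. This is a sketch, assuming the filesystem is ext4 on /dev/sda1, the new free space sits directly after it, and the cloud-guest-utils package provides growpart:

# inside the Linux guest, after booting from the enlarged disk
sudo growpart /dev/sda 1     # grow partition 1 to fill the disk
sudo resize2fs /dev/sda1     # grow the ext4 filesystem to fill the partition
df -h                        # verify the new size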

2. Partition the resized disk

Use GParted
https://www.youtube.com/watch?v=ikSIDI535L0&feature=iv&src_vid=r_UyKufXR3c&annotation_id=annotation_4072520487

If you don't have a live CD because you are using VirtualBox, just search for "Ubuntu Live CD GParted", download the ISO file, and mount it as your virtual optical drive. Remember to check your boot order.
Follow these instructions:

3. Move the .vdi file to another place or directory

(1) GUI way

The method below doesn't require removing your virtual machines and mucking up their settings.
1) Copy your VirtualBox VMs folder to the new drive.
2) Run the VirtualBox Manager, then open the Virtual Media Manager via File -> Virtual Media Manager.
2a. Right-click the VM's disk, choose the Release button, and then click the Remove button.
On the next dialog, you can either delete or keep the virtual drive file.
Close the Virtual Media Manager, leaving you in the VirtualBox Manager.
3) Move your VM (.vdi file) to the new location, say D:\tmp\vm_moved.vdi.
4) In VirtualBox, select the VM you just moved, click the Settings button,
and open the Storage section. Add a controller for the media (usually SATA),
then add a hard drive, choose "existing disk", and select the .vdi file at the new location.
Repeat for each machine you're moving.
5) Fire up the virtual machine at the new location to check that it works.
The next time you visit the Virtual Media Manager, hovering over a disk's entry
will show you where the virtual disk is stored.
Make sure you change your snapshot folders to point to the new drive if you're using snapshots.
Each machine has a snapshot folder setting, and the Default Machine Folder setting
in File -> Settings needs to be changed as well.

In addition, I also had to modify the path in the machine's XML settings file. After that it worked flawlessly.
NOTE: Things have changed a bit since this was written; see Rob's answer. It's extremely simple now, as sketched below.
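Presumably the "extremely simple" route is the built-in move support in newer VirtualBox releases, either the Move action in the GUI or the movevm subcommand. A minimal sketch of the latter, assuming VirtualBox 6.0 or later and a hypothetical VM named "Ubuntu_clang":

$ VBoxManage movevm "Ubuntu_clang" --folder "D:\tmp"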

(2) Command-line way

1) List existing VMs via VBoxManage list vms.
2) Next to the names of the VMs, inside curly brackets, their UUIDs are listed.
Copy the one of interest. Details about it can be retrieved via VBoxManage showvminfo UUID.
3) Unregister the VM of interest via VBoxManage unregistervm UUID.
4) Move the directory of the VM of interest.
5) Finally, register the machine again via VBoxManage registervm NameOfVM.vbox (giving its full path),
where NameOfVM is the actual name of the VM's .vbox file. The whole sequence is sketched below.
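A minimal end-to-end sketch of the above, assuming a hypothetical VM named "Ubuntu_clang" stored in the default machine folder on a Linux or macOS host:

$ VBoxManage list vms                      # find the VM name and its UUID
$ VBoxManage showvminfo "Ubuntu_clang"     # optional: inspect it first
$ VBoxManage unregistervm "Ubuntu_clang"   # unregister it; do NOT pass --delete
$ mv ~/"VirtualBox VMs"/Ubuntu_clang /new/path/
$ VBoxManage registervm /new/path/Ubuntu_clang/Ubuntu_clang.vbox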

For those who are unsure about the exact procedure, the command which clones an entire machine (including snapshots) is:

$ VBoxManage clonevm Source_VM --mode all --name "Cloned_VM" --basefolder /new/path/ --register

If necessary, quote the path and the names; Source_VM is the name of the existing machine being cloned.

Finally, check if all is well and remove the original.
That’s all, really. No need to bother with xml files or a hex editor. No need to use a GUI, either.

While there is no way to actually switch a VDI between fixed-size and dynamic,
you can clone the existing VDI into a new one with different settings using VBoxManage.

VBoxManage clonehd [old-VDI] [new-VDI] --variant Standard
VBoxManage clonehd [old-VDI] [new-VDI] --variant Fixed

If you want to expand the capacity of a VDI, you can do so with

VBoxManage modifyhd [VDI] --resize [megabytes]
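Note: on VirtualBox 5.x and later, clonehd and modifyhd still work but are deprecated aliases; the current subcommand names are clonemedium and modifymedium (worth checking against your installed version):

VBoxManage clonemedium disk [old-VDI] [new-VDI] --variant Standard
VBoxManage modifymedium disk [VDI] --resize [megabytes]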

Resizing only works for dynamic VDI images. However, you can combine the resize step with the conversion step to expand fixed-size VDIs (e.g., convert a fixed-size image to dynamic, expand it, and then convert the dynamic image back to a fixed-size image), as sketched below.
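A minimal sketch of that round trip, with hypothetical file names and a 60 GB (61440 MB) target:

VBoxManage clonehd fixed.vdi dynamic.vdi --variant Standard
VBoxManage modifyhd dynamic.vdi --resize 61440
VBoxManage clonehd dynamic.vdi fixed_big.vdi --variant Fixed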

If you want to compact the image as much as possible, be sure to zero out the free space first. On Linux this can be done by using the dd command to write zeros to a file until the disk is full and then deleting that file (with the caveat of the reserved space of ext and other file systems); a sketch follows.
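A sketch of the zero-fill plus compaction, assuming a Linux guest and a hypothetical image name; dd is expected to stop with a "No space left on device" error once the disk is full:

# inside the Linux guest
dd if=/dev/zero of=/zerofile bs=1M    # runs until the disk is full
rm /zerofile
sync

# on the host, against the dynamically allocated image
VBoxManage modifyhd mydisk.vdi --compact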

NOTE:
The --variant option specifies the type of the new .vdi file. By default it is Standard, which is the dynamically allocated format.

Here's a fairly simple process that worked for me to resize a VirtualBox (v4.3.16) fixed-size disk
to a 60 GB dynamic disk on my Mac (OS X 10.9.4) with Linux (Ubuntu 14.04) running as the guest OS:

In a shell on the Mac, in the directory containing the .vdi:

(1) Clone the fixed-size .vdi to a dynamic-size .vdi:
VBoxManage showhdinfo mydisk.vdi
VBoxManage clonehd mydisk.vdi mydisk_clone.vdi --format vdi --variant Standard
VBoxManage modifyhd mydisk_clone.vdi --resize 61440
VBoxManage showhdinfo mydisk_clone.vdi

(2) Expand the partition on the newly enlarged disk:
Now boot an Ubuntu Live CD.
Open GParted (GParted is usually installed on the Live CD by default, e.g. on Ubuntu 15.10 Desktop).
Delete the linux-swap extended partition.
If it has a lock icon, right-click it and choose "Swapoff" to unlock it so that the extended partition can be deleted.
Enlarge the original partition, leaving some space for the linux-swap extended partition we will recreate later.
Right-click the unallocated space and choose New to recreate the linux-swap extended partition.
Apply all changes, then shut down the Live CD.
(Note that the recreated swap partition gets a new UUID; see the fstab fix below.)
Boot into the newly enlarged VM.
Run df -h to check the resulting disk space.

(3) Clone the dynamic-size .vdi back to a fixed-size .vdi:
VBoxManage clonehd mydisk_clone.vdi mydisk_clone_fixed.vdi --format vdi --variant Fixed

(4) After the previous step completes, the disk is a fixed-size image again.

When booting the Ubuntu system, the following warning may appear:
A start job is running for dev-disk-by\x2duuid-db1d2b48\x2d0cfb\x2d4bdf\x2dabe8\x2db86b9b08dff9.device (1min 22s / 1min 30s)
It lasts 1 minute 30 seconds, which makes booting very slow.

The network interface failed to come up:

A start job is running for Wait on all "auto" /etc/network/interfaces to be up for network-online.target

This error means that systemd is waiting for every interface configured as "auto" in /etc/network/interfaces to come up.
If one of those interfaces does not exist (for example enp0s8 (Host-Only) is missing; this typically happens when you clone a system,
because the cloned VM's network adapters differ from the original VM's while /etc/network/interfaces stays the same),
then /etc/network/interfaces references an interface that isn't there, and this error appears.

This error in turn causes the ifup-wait-all-auto.service unit to fail.

Solution:

Use ifconfig -a or ifquery --list to find the interfaces that actually exist, then manually edit /etc/network/interfaces and delete the configuration for any interface that doesn't; a sketch follows below.
After editing, restart the networking service with service networking restart, sudo /etc/init.d/networking restart, or systemctl restart networking.
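A hypothetical /etc/network/interfaces after removing the stanza for the missing enp0s8 (the remaining interface name is also an assumption; keep whatever ifquery --list actually reports):

auto lo
iface lo inet loopback

auto enp0s17
iface enp0s17 inet dhcp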

The debugging process follows:

$ systemctl list-units --type=service
UNIT LOAD ACTIVE SUB DESCRIPTION
ifup-wait-all-auto.service loaded failed failed Wait for all "auto" /etc/network/interfaces to be up for network-online.target

$ systemctl list-unit-files | grep network

$ systemctl --all | grep ifup
● ifup-wait-all-auto.service loaded failed failed Wait for all "auto" /etc/network/interfaces to be up for network-online.target
ifup@enp0s17.service loaded active exited ifup for enp0s17
ifup@enp0s8.service loaded active exited ifup for enp0s8
system-ifup.slice loaded active active system-ifup.slice

$ sudo find / -name "ifup@enp0s17.service"
/sys/fs/cgroup/systemd/system.slice/system-ifup.slice/ifup@enp0s17.service
$ ls /sys/fs/cgroup/systemd/system.slice/system-ifup.slice/ifup@enp0s17.service
cgroup.clone_children cgroup.procs notify_on_release tasks

$ sudo find / -name "network-online.target"
/lib/systemd/system/network-online.target

$ systemctl list-dependencies network-online.target
network-online.target
● ├─ifup-wait-all-auto.service
● └─NetworkManager-wait-online.service

$ sudo find / -name "ifup-wait-all-auto.service"
/lib/systemd/system/network-online.target.wants/ifup-wait-all-auto.service
/lib/systemd/system/ifup-wait-all-auto.service
/sys/fs/cgroup/devices/system.slice/ifup-wait-all-auto.service
/sys/fs/cgroup/systemd/system.slice/ifup-wait-all-auto.service

$ sudo vim /lib/systemd/system/ifup-wait-all-auto.service
[Unit]
Description=Wait for all "auto" /etc/network/interfaces to be up for network-online.target
Documentation=man:interfaces(5) man:ifup(8)
DefaultDependencies=no
After=local-fs.target
Before=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
TimeoutStartSec=2min        <- change this to 5sec
ExecStart=/bin/sh -ec '\
    for i in $(ifquery --list --exclude lo --allow auto); do INTERFACES="$INTERFACES$i "; done; \
    [ -n "$INTERFACES" ] || exit 0; \
    while ! ifquery --state $INTERFACES >/dev/null; do sleep 1; done; \
    for i in $INTERFACES; do while [ -e /run/network/ifup-$i.pid ]; do sleep 0.2; done; done'
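After editing a unit file, reload systemd so the change takes effect:

$ sudo systemctl daemon-reload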

Use systemd-analyze blame to see if there is a service that takes a long time to start.

Masking the offending service can get the startup time down to mere seconds:

systemctl mask ifup-wait-all-auto.service
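Masking points the unit at /dev/null so it can never be started; if you later fix /etc/network/interfaces and want the wait behavior back, reverse it with:

$ sudo systemctl unmask ifup-wait-all-auto.service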

List all network interfaces:
$ ifquery --list

Cause:

Run the following command to investigate (see https://donnutcompute.wordpress.com/2014/04/19/a-start-job-is-running-for-dev-disk-by/):

$ journalctl -b
Feb 01 12:28:36 ub1510alx systemd[679]: Startup finished in 74ms.
Feb 01 12:28:36 ub1510alx systemd[1]: Started User Manager for UID 1000.
Feb 01 12:28:39 ub1510alx systemd[1]: dev-disk-by\x2duuid-bc922c81\x2d9c9f\x2d4461\x2dadcf\x2db86b9b08dff9.device: Job dev-disk-by\x2duuid-bc922c81\x2d9c9f\x2d4461\x2da
Feb 01 12:28:39 ub1510alx systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-bc922c81\x2d9c9f\x2d4461\x2dadcf\x2db86b9b08dff9.device.
Feb 01 12:28:39 ub1510alx systemd[1]: Dependency failed for /dev/disk/by-uuid/bc922c81-9c9f-4461-adcf-b86b9b08dff9.
Feb 01 12:28:39 ub1510alx systemd[1]: dev-disk-by\x2duuid-bc922c81\x2d9c9f\x2d4461\x2dadcf\x2db86b9b08dff9.swap: Job dev-disk-by\x2duuid-bc922c81\x2d9c9f\x2d4461\x2dadc
Feb 01 12:28:39 ub1510alx systemd[1]: Startup finished in 9.433s (kernel) + 3min 2.537s (userspace) = 3min 11.971s.
Feb 01 12:28:39 ub1510alx systemd[1]: dev-disk-by\x2duuid-bc922c81\x2d9c9f\x2d4461\x2dadcf\x2db86b9b08dff9.device: Job dev-disk-by\x2duuid-bc922c81\x2d9c9f\x2d4461\x2d

What is the swap status?
$ cat /proc/swaps
Filename Type Size Used Priority
There is nothing there!

Is fstab right?
$ cat /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# / was on /dev/sda1 during installation
UUID=96f31591-dcd7-4e89-b50d-b8a62a4776db /    ext4    errors=remount-ro 0    1
# swap was on /dev/sda5 during installation
UUID=bc922c81-9c9f-4461-adcf-b86b9b08dff9 none swap    sw    0    0

What is the real swap UUID?
$ lsblk -f
NAME   FSTYPE  LABEL                       UUID                                 MOUNTPOINT
sda
├─sda1 ext4                                96f31591-dcd7-4e89-b50d-b8a62a4776db /
├─sda2
└─sda5 swap                                04080eae-5988-4f53-a362-796c3408dd08
sr0    iso9660 VBOXADDITIONS_5.0.12_104815 2015-12-18-16-03-06-00

These are the real UUIDs of the partitions after repartitioning. The swap UUID in fstab is still the UUID of the old, deleted swap partition, so it is clearly wrong. The swap UUID in fstab is incorrect!

Analysis: after the old swap partition was deleted, the UUID of the newly created swap partition was never written back to fstab; fstab still holds the old swap partition's UUID, so the boot process spends a long time waiting for a device that no longer exists.

Solution:
Update the swap partition's UUID in fstab (i.e., the UUID for /dev/sda5, the swap partition):

$ vim /etc/fstab

Change the UUID to the correct one shown by lsblk -f:

# /dev/sda5
UUID=04080eae-5988-4f53-a362-796c3408dd08 none swap sw 0 0
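To verify without rebooting, re-enable swap and check that the new partition is listed:

$ sudo swapon -a
$ cat /proc/swaps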
