
Instance gets more than one fixed IP (grizzly-g3)

asked 2013-03-26 01:09:38 -0600


I have installed Grizzly G3, but Quantum does not work well. When I boot 128 instances, I find that one of the instances gets more than one fixed IP; however, when I boot 64 instances, it never happens. Besides that, sometimes I cannot ping a VM via its floating IP. I did not find any error message in my Quantum logs (all the files under /var/log/quantum). Below are the error output and configurations.

| 97a93600-38e2-4700-9851-15ef56c1d628 | slave | ACTIVE | demo-int-net=172.16.100.4                   |
| 99aeb6b8-4252-4839-a7d1-f87853116100 | slave | ACTIVE | demo-int-net=172.16.100.117                 |
| 9aa82a35-c9f1-4f44-a108-d14e74eec231 | slave | ACTIVE | demo-int-net=172.16.100.108, 172.16.100.109 |
| 9b6b1289-c450-4614-b647-e5ebdffff80a | slave | ACTIVE | demo-int-net=172.16.100.5                   |
| 9e0d3aa5-0f15-4b24-944a-6d6c3e18ce64 | slave | ACTIVE | demo-int-net=172.16.100.35                  |
| 9ea62124-9128-43cc-acdd-142f1e7743d6 | slave | ACTIVE | demo-int-net=172.16.100.132                 |
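A quick, hedged way to spot affected instances in output like this (it assumes the standard nova list table format, where multiple fixed IPs are comma-separated in the networks column):

# Rows listing two comma-separated fixed IPs are the anomalous instances.
nova list | grep ','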

My setup: one DB host (database service), one Glance host (glance service), one API host (keystone, nova-api, nova-scheduler, nova-conductor, quantum-server, quantum-dhcp-agent, quantum-l3-agent, quantum-plugin-openvswitch-agent), and eight compute hosts (each with nova-compute and quantum-plugin-openvswitch-agent). I checked that all services on all hosts are working well.
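One way to double-check the agents from the API host (a sketch assuming the Grizzly agent-management extension is loaded; the agent_down_time setting in the config below belongs to that extension):

# Each DHCP, L3, and OVS agent should report alive as :-) rather than xxx.
quantum agent-list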

I used a VLAN-type network and the Open vSwitch plugin.

My quantum.conf:

[DEFAULT]
# Default log level is INFO
# verbose and debug has the same result.
# One of them will set DEBUG log level output
debug = True

# Address to bind the API server
bind_host = 0.0.0.0

# Port the bind the API server to
bind_port = 9696

# Quantum plugin provider module
# core_plugin =
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2

# Advanced service modules
service_plugins =

# Paste configuration file
api_paste_config = /etc/quantum/api-paste.ini

# The strategy to be used for auth.
# Supported values are 'keystone'(default), 'noauth'.
auth_strategy = keystone

# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call.
allowed_rpc_exception_modules = quantum.openstack.common.exception, nova.exception

# AMQP exchange to connect to if using RabbitMQ or QPID
control_exchange = quantum

# RPC driver. DHCP agents needs it.
notification_driver = quantum.openstack.common.notifier.rpc_notifier

# default_notification_level is used to form actual topic name(s) or to set logging level
default_notification_level = INFO

# Defined in rpc_notifier, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications

[QUOTAS]
# resource name(s) that are supported in quota features
quota_items = network,subnet,port

# default number of resource allowed per tenant, minus for unlimited
default_quota = -1

# number of networks allowed per tenant, and minus means unlimited
quota_network = 10

# number of subnets allowed per tenant, and minus means unlimited
quota_subnet = 10

# number of ports allowed per tenant, and minus means unlimited
quota_port = 5000
quota_floatingip = 5000

# default driver to use for quota checks
quota_driver = quantum.quota.ConfDriver

# =========== items for agent management extension =============
# Seconds to regard the agent as down.
agent_down_time = 5
# =========== end of items for agent management extension =====

[DEFAULT_SERVICETYPE]
# Description of the default service type (optional)
description = "default service type"
# Enter a service definition line for each advanced service provided
# by the default service type.
# Each service definition should be in the following format:
# <service>:<plugin>[:driver]

[SECURITYGROUP]
# If set to true this allows quantum to receive proxied security group calls from nova
proxy_mode = False

[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum ... (more)


31 answers


answered 2013-03-26 04:31:13 -0600

During the portion of the log where you are deleting VMs, I see two Quantum exceptions, but they are "port not found" errors that seem correlated with Nova "instance not found" errors, and they aren't the port IDs mentioned above, so I doubt they are related.

2013-03-26 03:58:04.965 ERROR nova.network.quantumv2.api [req-b6fe4986-507d-4735-a504-adbf4386b5bc c9ae6c830b3b4d40925e5fe23f671893 ebacbc9c99f84607920b2ac749608623] Failed to delete quantum port 03347623-a4c8-473f-bde6-845d19c0009c
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api Traceback (most recent call last):
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api   File "/usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py", line 309, in deallocate_for_instance
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api     quantumv2.get_client(context).delete_port(port['id'])
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api   File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 102, in with_params
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api     ret = self.function(instance, *args, **kwargs)
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api   File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 236, in delete_port
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api     return self.delete(self.port_path % (port))
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api   File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 521, in delete
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api     headers=headers, params=params)
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api   File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 510, in retry_request
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api     headers=headers, params=params)
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api   File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 455, in do_request
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api     self._handle_fault_response(status_code, replybody)
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api   File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 436, in _handle_fault_response
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api     exception_handler_v20(status_code, des_error_body)
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api   File "/usr/lib/python2.7/dist-packages/quantumclient/v2_0/client.py", line 75, in exception_handler_v20
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api     raise exceptions.QuantumClientException(message=error_dict)
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api QuantumClientException: Port 03347623-a4c8-473f-bde6-845d19c0009c could not be found on network None
2013-03-26 03:58:04.965 1290 TRACE nova.network.quantumv2.api


answered 2013-03-26 03:06:40 -0600

To me, this output says that 108 and 109 are actually on different ports.

Can you do a quantum port-show <port-uuid> for both of the ports below? In particular, I want to confirm that they have the same device_id.
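For concreteness, a minimal sketch of the requested check, using the two port UUIDs from the port list quoted later in this thread; the field to compare in each result is device_id:

# Show full details for each suspect port and compare the device_id fields.
quantum port-show 1b93218d-2eb2-4064-b70c-d6562724fc47
quantum port-show fc44d25d-574e-4c09-a6fa-74cd522b0e3c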

From the looks of it, this is an issue in Nova, as Quantum itself has not actually allocated multiple IPs to a single port. Rather, pending the output I've requested above, it seems like Nova created two ports for the instance rather than one.

Looking through your logs, there is an extremely high number of exceptions and tracebacks in your nova log. I wonder if any of those failures caused an instance to be respawned without the quantum port from the initial creation being cleaned up.

Unfortunately, the nova-compute log seems filled with tracebacks unrelated to Quantum, so it's hard to isolate a possible cause of the issue you're seeing above. It seems like you have some database schema sync issues at the least. I'd suggest cleaning up these issues and seeing if you can still reproduce this.
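A hedged sketch of the first cleanup step being suggested (a Grizzly-era command; back up the nova database before running it):

# Bring the nova database schema up to date, run on a host with nova.conf configured.
nova-manage db sync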

Dan



answered 2013-03-26 02:27:12 -0600


I listed the ports of the instance that has two fixed IPs; below is the output:

+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 1b93218d-2eb2-4064-b70c-d6562724fc47 |      | fa:16:3e:10:7d:8c | {"subnet_id": "24be33c7-64f9-4695-811d-33eeb4ad75da", "ip_address": "172.16.100.108"} |
| fc44d25d-574e-4c09-a6fa-74cd522b0e3c |      | fa:16:3e:66:e2:60 | {"subnet_id": "24be33c7-64f9-4695-811d-33eeb4ad75da", "ip_address": "172.16.100.109"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+

I listed all network interfaces in this VM and only found one interface, with fixed IP 172.16.100.108.

Here is the link to the nova-compute log: http://paste.openstack.org/show/34579/

The ID of this instance is 9aa82a35-c9f1-4f44-a108-d14e74eec231.


answered 2013-03-26 01:29:34 -0600

The multiple IPs for a VM seem like a bug, assuming you made the same request for all VMs (which it sounds like you did).

Can you provide the output of quantum port-list? I want to confirm whether there is a single port that has multiple IPs, or whether Nova is somehow associating multiple ports with the instance.
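For example, a sketch assuming the Grizzly quantum client (the "-- --device_id=" form was the old client's way of passing arbitrary filters; the UUID is the affected instance from the nova list output above):

# All ports with their fixed_ips; look for a single port carrying two IPs.
quantum port-list

# Only the ports whose device_id is the affected instance.
quantum port-list -- --device_id=9aa82a35-c9f1-4f44-a108-d14e74eec231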

If there is a port with two IPs, can you try to reproduce this with verbose=True and debug=True, and post the log somewhere?

If not, then I'd be interested in seeing the nova-compute log for the host running the VM with multiple IPs. It's possible that something strange is happening on the Nova side, resulting in multiple ports being associated with a single VM.


answered 2013-03-28 17:45:08 -0600

Unfortunately we have not been able to reproduce this in any of the runs since yesterday. Will keep monitoring.


answered 2013-04-12 04:21:37 -0600

Interesting: your nova-compute log shows there was a timeout. Vish was thinking that perhaps the port had already been created when a timeout occurred, the VM was rescheduled on another hypervisor, and another port was created.
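If that theory holds, the second port is an orphan left over from the first scheduling attempt. A hedged cleanup sketch with placeholder values (confirm inside the guest which IP is actually in use before deleting anything):

# Find both ports attached to the instance.
quantum port-list -- --device_id=<instance-uuid>

# After confirming which port the guest is not using, remove the orphan.
quantum port-delete <orphaned-port-uuid>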


answered 2013-04-12 04:08:38 -0600

Nova-compute log: http://paste.openstack.org/show/35783/

Quantum log: http://paste.openstack.org/show/35880/

Instance and port details via CLI: http://paste.openstack.org/show/35881/


answered 2013-03-27 06:57:44 -0600


Sorry for my late response. I used --nic net-id, not --nic port-id. I can see that every VM has at least one port. I have reproduced this error after cleaning all the nova-compute.log files on the compute hosts and the quantum logs on the API host. This time there are four instances that have two fixed IPs; each instance has two ports with the same device_id, and in the libvirt XML there are two tap interfaces for these instances. If I log into such a VM and execute "ifconfig -a", it outputs two NICs (one up with the fixed IP, the other down). I will take one instance as an example; below is the detailed information.

Output of nova list:

| 16599b42-190e-473f-8586-d40746484ce7 | slave | ACTIVE | demo-int-net=172.16.100.21                  |
| 18298dc0-3042-42ee-80c6-08483a582712 | slave | ACTIVE | demo-int-net=172.16.100.117, 172.16.100.123 |
| 19bbe682-45df-4ebf-972e-617cfab76c82 | slave | ACTIVE | demo-int-net=172.16.100.17                  |

Port list of this instance:

+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| 222d558f-bb03-4295-919b-9081150ada0c |      | fa:16:3e:1d:d1:60 | {"subnet_id": "3a8f62c8-a5ce-4da3-b5a2-1ca612496fc1", "ip_address": "172.16.100.117"} |
| c0002dbb-50bf-48dc-8755-ba6b8a011452 |      | fa:16:3e:a1:95:46 | {"subnet_id": "3a8f62c8-a5ce-4da3-b5a2-1ca612496fc1", "ip_address": "172.16.100.123"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+

Status of each port:

+----------------------+----------------------------------------------------------------------------------------+
| Field                | Value                                                                                  |
+----------------------+----------------------------------------------------------------------------------------+
| admin_state_up       | True                                                                                   |
| binding:capabilities | {"port_filter": true}                                                                  |
| binding:vif_type     | ovs                                                                                    |
| device_id            | 18298dc0-3042-42ee-80c6-08483a582712                                                   |
| device_owner         | compute:None                                                                           |
| fixed_ips            | {"subnet_id": "3a8f62c8-a5ce-4da3-b5a2-1ca612496fc1", "ip_address": "172.16.100.117"} |
| id                   | 222d558f-bb03-4295-919b-9081150ada0c                                                   |
| mac_address          | fa:16:3e:1d:d1:60                                                                      |
| name                 |                                                                                        |
| network_id           | 4a798345-561f-4c9e-a5c4-aac87f1daf26                                                   |
| security_groups      | 49a820ec-cf06-4316-bf4b-c1f20902848f                                                   |
| status               | ACTIVE                                                                                 |
| tenant_id            | ebacbc9c99f84607920b2ac749608623                                                       |
+----------------------+----------------------------------------------------------------------------------------+

+----------------------+----------------------------------------------------------------------------------------+
| Field                | Value                                                                                  |
+----------------------+----------------------------------------------------------------------------------------+
| admin_state_up       | True                                                                                   |
| binding:capabilities | {"port_filter": true}                                                                  |
| binding:vif_type     | ovs                                                                                    |
| device_id            | 18298dc0-3042-42ee-80c6-08483a582712                                                   |
| device_owner         | compute:None                                                                           |
| fixed_ips            | {"subnet_id": "3a8f62c8-a5ce-4da3-b5a2-1ca612496fc1", "ip_address": "172.16.100.123"} |
| id                   | c0002dbb-50bf-48dc-8755-ba6b8a011452                                                   |
| mac_address          | fa:16:3e:a1:95:46                                                                      |
| name                 |                                                                                        |
| network_id           | 4a798345-561f-4c9e-a5c4-aac87f1daf26                                                   |
| security_groups      | 49a820ec-cf06-4316-bf4b-c1f20902848f                                                   |
| status               | ACTIVE                                                                                 |
| tenant_id            | ebacbc9c99f84607920b2ac749608623                                                       |
+----------------------+----------------------------------------------------------------------------------------+

libvirt XML (two tap interfaces):

<domain type="kvm">
  <name>instance-0000008f</name>
  <uuid>18298dc0-3042-42ee-80c6-08483a582712</uuid>
  <memory unit="KiB">4194304</memory>
  <currentMemory unit="KiB">4194304</currentMemory>
  <vcpu placement="static">2</vcpu>
  <sysinfo type="smbios">
    <system>
      <entry name="manufacturer">OpenStack Foundation</entry>
      <entry name="product">OpenStack Nova</entry>
      <entry name="version">2013.1</entry>
      <entry name="serial">db7cce3d-4e73-11e0-bf2c-001e67059c8c</entry>
      <entry name="uuid">18298dc0-3042-42ee-80c6-08483a582712</entry>
    </system>
  </sysinfo>
  <os>
    <type arch="x86_64" machine="pc-1.2">hvm</type>
    <boot dev="hd"/>
    <smbios mode="sysinfo"/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode="host-model">
    <model fallback="allow"/>
  </cpu>
  <clock offset="utc">
    <timer name="pit" tickpolicy="delay"/>
    <timer name="rtc" tickpolicy="catchup"/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2" cache="none"/>
      <source file="/var/lib/nova/instances/18298dc0-3042-42ee-80c6-08483a582712/disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <controller type="usb" index="0">
    </controller>
    <interface type="bridge">
      <mac address="fa:16:3e:1d:d1:60"/>
      <source bridge="br-int"/>
      <virtualport type="openvswitch">
        <parameters interfaceid="222d558f-bb03-4295-919b-9081150ada0c"/>
      </virtualport>
      <target dev="tap222d558f-bb"/>
      <model type="virtio"/>
    </interface>
    <interface type="bridge">
      <mac address="fa:16:3e:a1:95:46"/>
      <source bridge="br-int"/>
      <virtualport type="openvswitch">
        <parameters interfaceid="c0002dbb-50bf-48dc-8755-ba6b8a011452"/>
      </virtualport>
      <target dev="tapc0002dbb-50"/>
      <model type="virtio"/>
    </interface>
    <serial type="file">
      <source path="/var/lib/nova/instances/18298dc0-3042-42ee-80c6-08483a582712/console.log"/>
      <target port="0"/>
    </serial ... (more)
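A hedged way to cross-check which tap interface is actually live, using the instance name and port UUIDs above (assumes standard libvirt and Open vSwitch tooling on the compute host):

# On the compute host: which OVS ports exist on br-int for this VM's taps?
sudo ovs-vsctl list-ports br-int | grep -E 'tap222d558f-bb|tapc0002dbb-50'

# Confirm the interfaces libvirt attached to the domain.
virsh dumpxml instance-0000008f | grep -E 'interfaceid|target dev'

# Inside the guest: only one NIC will actually hold a DHCP lease.
ifconfig -a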

answered 2013-03-26 17:51:34 -0600

And we were using the --nic net-id option.
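For reference, a minimal sketch of the two boot styles being contrasted, with placeholder values (with net-id, Nova creates the quantum port itself; with port-id, Nova attaches a pre-created port):

# Nova creates the quantum port on the given network.
nova boot --image <image-id> --flavor <flavor-id> --nic net-id=<net-uuid> slave

# Nova attaches an existing quantum port created beforehand.
quantum port-create <net-uuid>
nova boot --image <image-id> --flavor <flavor-id> --nic port-id=<port-uuid> slave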
