Just start multiple heat-api processes, as many as you want. Then configure the keystone endpoint for heat to point at your haproxy host:
[root@openstack1 ~]# keystone endpoint-list | grep 8004
| e41899cd971b437182f1be06ed98a129 | DefaultRegion | http://haproxyhost:8004/v1/$(tenant_id)s | http://haproxyhost:8004/v1 | http://haproxyhost:8004/v1/$(tenant_id)s | 7648a4b19fc64cbdb60e23aa42fa369a |
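If your endpoint still points at a single heat-api node, recreate it against the haproxy host. A rough sketch with the legacy keystone CLI (reusing the endpoint and service ids from the listing above; adjust the URLs to your setup):

[root@openstack1 ~]# keystone endpoint-delete e41899cd971b437182f1be06ed98a129
[root@openstack1 ~]# keystone endpoint-create \
    --region DefaultRegion \
    --service-id 7648a4b19fc64cbdb60e23aa42fa369a \
    --publicurl 'http://haproxyhost:8004/v1/$(tenant_id)s' \
    --internalurl 'http://haproxyhost:8004/v1' \
    --adminurl 'http://haproxyhost:8004/v1/$(tenant_id)s'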
Then point haproxy at those heat-api processes. Here's one example haproxy config, but there are many valid ways of doing it. Tweak the timeouts to your liking.
root@haproxyhost: cat /etc/haproxy/haproxy.cfg
global
    daemon

defaults
    mode http
    log 127.0.0.1:514 local4
    maxconn 10000
    timeout connect 4s
    timeout client 180s
    timeout server 180s
    option redispatch
    retries 3
    balance roundrobin

listen heatAPI
    bind 0.0.0.0:8004
    server heatnode1 heatnode1:8004 check inter 3s rise 2 fall 3
    server heatnode2 heatnode2:8004 check inter 3s rise 2 fall 3
Or, run multiple heat-api processes on the same node, under different ports:
listen heatAPI
    bind 0.0.0.0:8004
    server heatnode1a heatnode1:8004 check inter 3s rise 2 fall 3
    server heatnode1b heatnode1:18004 check inter 3s rise 2 fall 3
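To get the second heat-api listening on 18004, one approach (just a sketch; the exact option group can vary by release, so check your heat.conf) is to start another instance with its own config file that only changes the bind port:

[root@heatnode1 ~]# cp /etc/heat/heat.conf /etc/heat/heat-18004.conf
(in heat-18004.conf, under [heat_api], set bind_port = 18004)
[root@heatnode1 ~]# heat-api --config-file /etc/heat/heat-18004.conf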
Note: using "mode tcp" instead of "mode http" is also possible, and results in better performance.
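For example, the heatAPI section above in tcp mode would look roughly like this (haproxy then just forwards connections instead of parsing HTTP):

listen heatAPI
    mode tcp
    bind 0.0.0.0:8004
    server heatnode1 heatnode1:8004 check inter 3s rise 2 fall 3
    server heatnode2 heatnode2:8004 check inter 3s rise 2 fall 3

Either way, you can sanity-check the config with "haproxy -c -f /etc/haproxy/haproxy.cfg" before reloading.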