Compare commits

...

2 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| Michael Hoennig | 9bd8c7b54e | fixes after code-review | 2024-08-01 11:45:37 +02:00 |
| Michael Hoennig | a840337a54 | improve rbac-performance-analysis.md again | 2024-08-01 11:28:57 +02:00 |
3 changed files with 28 additions and 5 deletions


@@ -291,6 +291,7 @@ The slowest query now was fetching Relations joined with Contact, Anchor-Person
We changed these mappings from `EAGER` (the default) to `LAZY` using `@ManyToOne(fetch = FetchType.LAZY)` and got this result:
:::small
| query | calls | total (min) | mean (ms) |
|-------|------:|------------:|----------:|
| select hope1_0.uuid,hope1_0.familyname,hope1_0.givenname,hope1_0.persontype,hope1_0.salutation,hope1_0.title,hope1_0.tradename,hope1_0.version from public.hs_office_person_rv hope1_0 where hope1_0.uuid=$1 | 1015 | 4 | 238 |
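
For reference, the kind of mapping change described above looks roughly like this; a sketch only, with placeholder entity and field names (not taken from the codebase) and assuming Jakarta Persistence imports:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.Id;
import jakarta.persistence.ManyToOne;

import java.util.UUID;

// Illustrative sketch only: for @ManyToOne, FetchType.EAGER is the JPA default,
// so every load of the owning entity also fetches the referenced row.
// Switching to LAZY defers that fetch until the association is actually accessed.
@Entity
public class SomeOwningEntity {

    @Id
    private UUID uuid;

    // before: @ManyToOne            (implicitly FetchType.EAGER)
    @ManyToOne(fetch = FetchType.LAZY)
    private SomeReferencedEntity referenced;
}

// placeholder for the referenced entity, just to keep the sketch self-contained
@Entity
class SomeReferencedEntity {

    @Id
    private UUID uuid;
}
```

The trade-off of `LAZY` is that accessing the association later triggers an extra query (or requires an explicit fetch join), so it mainly helps where the referenced data is not needed at all.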
@@ -325,7 +326,28 @@ Keep in mind, it's the same table with the same RBAC-triggers, just a different
Once `EntityManager.persist` was replaced by an explicit SQL INSERT, just for `HsHostingAssetRawEntity`, the total time was down to 17min. Thus importing the UnixUsers and EmailAliases took just 5min, which is an acceptable result. The total import of all HostingAssets is now estimated at about 1 hour (on my developer laptop).
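
The importer change itself is not shown in this diff. As a rough sketch only, assuming a plain JPA `EntityManager` (Jakarta Persistence) and reusing the column list visible in query No. 1 of the table below, such an explicit INSERT could look like this (class, method, and parameter names are made up for the example):

```java
import jakarta.persistence.EntityManager;

import java.util.UUID;

// Sketch: write the hosting-asset row with a native INSERT instead of EntityManager.persist(),
// so the row does not go through Hibernate's persistence-context bookkeeping.
public class ExplicitHostingAssetInsert {

    public static void insertRawHostingAsset(
            final EntityManager em,
            final UUID uuid, final String type,
            final UUID bookingItemUuid, final UUID parentAssetUuid,
            final UUID assignedToAssetUuid, final UUID alarmContactUuid,
            final String identifier, final String caption, final String configJson) {
        em.createNativeQuery("""
                insert into hs_hosting_asset(
                    uuid, type, bookingitemuuid, parentassetuuid, assignedtoassetuuid,
                    alarmcontactuuid, identifier, caption, config, version)
                values (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, cast(?9 as jsonb), ?10)
                """)
                .setParameter(1, uuid)
                .setParameter(2, type)
                .setParameter(3, bookingItemUuid)
                .setParameter(4, parentAssetUuid)
                .setParameter(5, assignedToAssetUuid)
                .setParameter(6, alarmContactUuid)
                .setParameter(7, identifier)
                .setParameter(8, caption)
                .setParameter(9, configJson)
                .setParameter(10, 0) // initial version
                .executeUpdate();
    }
}
```

Note that the database-side RBAC triggers still fire for each inserted row; only the JPA-side per-entity overhead is avoided.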
Now the longest-running queries are these:
| No. | calls | total (min) | mean (ms) | query |
|----:|------:|------------:|----------:|:------|
| 1 | 13,093 | 4 | 21 | insert into hs_hosting_asset( uuid, type, bookingitemuuid, parentassetuuid, assignedtoassetuuid, alarmcontactuuid, identifier, caption, config, version) values ( $1, $2, $3, $4, $5, $6, $7, $8, cast($9 as jsonb), $10) |
| 2 | 517 | 4 | 502 | select hore1_0.uuid,hore1_0.anchoruuid,hore1_0.contactuuid,hore1_0.holderuuid,hore1_0.mark,hore1_0.type,hore1_0.version from public.hs_office_relation_rv hore1_0 where hore1_0.uuid=$1 |
| 3 | 13,144 | 4 | 21 | call buildRbacSystemForHsHostingAsset(NEW) |
| 4 | 96,632 | 3 | 2 | call grantRoleToRole(roleUuid, superRoleUuid, superRoleDesc.assumed) |
| 5 | 120,815 | 3 | 2 | select * from isGranted(array[granteeId], grantedId) |
| 6 | 123,740 | 3 | 2 | with recursive grants as ( select descendantUuid, ascendantUuid from RbacGrants where descendantUuid = grantedId union all select "grant".descendantUuid, "grant".ascendantUuid from RbacGrants "grant" inner join grants recur on recur.ascendantUuid = "grant".descendantUuid ) select exists ( select $3 from grants where ascendantUuid = any(granteeIds) ) or grantedId = any(granteeIds) |
| 7 | 497 | 2 | 259 | select hoce1_0.uuid,hoce1_0.caption,hoce1_0.emailaddresses,hoce1_0.phonenumbers,hoce1_0.postaladdress,hoce1_0.version from public.hs_office_contact_rv hoce1_0 where hoce1_0.uuid=$1 |
| 8 | 497 | 2 | 255 | select hope1_0.uuid,hope1_0.familyname,hope1_0.givenname,hope1_0.persontype,hope1_0.salutation,hope1_0.title,hope1_0.tradename,hope1_0.version from public.hs_office_person_rv hope1_0 where hope1_0.uuid=$1 |
| 9 | 13,144 | 1 | 8 | SELECT createRoleWithGrants( hsHostingAssetTENANT(NEW), permissions => array[$7], incomingSuperRoles => array[ hsHostingAssetAGENT(NEW), hsOfficeContactADMIN(newAlarmContact)], outgoingSubRoles => array[ hsBookingItemTENANT(newBookingItem), hsHostingAssetTENANT(newParentAsset)] ) |
| 10 | 13,144 | 1 | 5 | SELECT createRoleWithGrants( hsHostingAssetADMIN(NEW), permissions => array[$7], incomingSuperRoles => array[ hsBookingItemAGENT(newBookingItem), hsHostingAssetAGENT(newParentAsset), hsHostingAssetOWNER(NEW)] ) |
That the `INSERT into hs_hosting_asset` (No. 1) takes up the most time seems normal, and 21ms per call is also fine.
It seems that the trigger effects (e.g. No. 3 and No. 4) are included in the measurement of the INSERT which caused them; otherwise, summing up the totals would exceed the actual total time of the whole import. And it was to be expected that building the RBAC rules for new business objects takes most of the time.
In production, the `SELECT ... FROM hs_office_relation_rv` (No. 2), at about 0.5 seconds per call, could still be a problem. But once we also apply the improvements from the hosting-asset area to the office area, this should no longer be a problem for the import.
## Further Options To Explore
1. Instead of separate SQL INSERT statements, we could try bulk INSERT (see the sketch below).
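
A minimal sketch of that idea, using plain JDBC batching rather than the importer's actual persistence setup; the class and record names are illustrative assumptions, only the target table and columns are taken from the measurements above:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import java.util.UUID;

// Sketch: send many hosting-asset INSERTs as one JDBC batch instead of one statement per row,
// reducing the number of round-trips to the database.
public class BulkHostingAssetInsert {

    // hypothetical value holder for already-resolved column values
    public record RawAssetRow(
            UUID uuid, String type,
            UUID bookingItemUuid, UUID parentAssetUuid,
            UUID assignedToAssetUuid, UUID alarmContactUuid,
            String identifier, String caption, String configJson) {}

    private static final String INSERT_SQL = """
            insert into hs_hosting_asset(
                uuid, type, bookingitemuuid, parentassetuuid, assignedtoassetuuid,
                alarmcontactuuid, identifier, caption, config, version)
            values (?, ?, ?, ?, ?, ?, ?, ?, cast(? as jsonb), ?)
            """;

    public static void insertAll(final Connection connection, final List<RawAssetRow> rows)
            throws SQLException {
        try (PreparedStatement stmt = connection.prepareStatement(INSERT_SQL)) {
            for (final RawAssetRow row : rows) {
                stmt.setObject(1, row.uuid());
                stmt.setString(2, row.type());
                stmt.setObject(3, row.bookingItemUuid());
                stmt.setObject(4, row.parentAssetUuid());
                stmt.setObject(5, row.assignedToAssetUuid());
                stmt.setObject(6, row.alarmContactUuid());
                stmt.setString(7, row.identifier());
                stmt.setString(8, row.caption());
                stmt.setString(9, row.configJson());
                stmt.setInt(10, 0); // initial version
                stmt.addBatch();
            }
            stmt.executeBatch(); // one round-trip per batch instead of per row
        }
    }
}
```

With the PostgreSQL JDBC driver, the connection property `reWriteBatchedInserts=true` can additionally rewrite such batches into multi-row INSERT statements.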


@@ -118,8 +118,8 @@ create trigger hs_hosting_asset_type_hierarchy_check_tg
CREATE SEQUENCE IF NOT EXISTS hs_hosting_asset_unixuser_system_id_seq
    AS integer
-   MINVALUE 100000000
-   MAXVALUE 199999999
+   MINVALUE 1000000
+   MAXVALUE 9999999
    NO CYCLE
    OWNED BY NONE;


@@ -109,8 +109,8 @@ class HsUnixUserHostingAssetValidatorUnitTest {
        .identifier("abc00-temp")
        .caption("some test UnixUser with invalid properties")
        .config(ofEntries(
-               entry("SSD hard quota", 100),
-               entry("SSD soft quota", 200),
+               entry("SSD hard quota", 60000),
+               entry("SSD soft quota", 70000),
                entry("HDD hard quota", 100),
                entry("HDD soft quota", 200),
                entry("shell", "/is/invalid"),
@@ -126,7 +126,8 @@ class HsUnixUserHostingAssetValidatorUnitTest {
        // then
        assertThat(result).containsExactlyInAnyOrder(
-               "'UNIX_USER:abc00-temp.config.SSD soft quota' is expected to be at most 100 but is 200",
+               "'UNIX_USER:abc00-temp.config.SSD hard quota' is expected to be at most 51200 but is 60000",
+               "'UNIX_USER:abc00-temp.config.SSD soft quota' is expected to be at most 60000 but is 70000",
                "'UNIX_USER:abc00-temp.config.HDD hard quota' is expected to be at most 0 but is 100",
                "'UNIX_USER:abc00-temp.config.HDD soft quota' is expected to be at most 100 but is 200",
                "'UNIX_USER:abc00-temp.config.homedir' is readonly but given as '/is/read-only'",