
Key Points

GCP services match Apache open-source projects



References


Key Concepts



Potential Value Opportunities



Potential Challenges



Tracking Costs of File Serving vs Larger App Sizes


wes [8:19 AM]
@jake @piotr.s.brainhub @jvila

Give me your thoughts on this:

Recently (last week) I made updates to both the inventory and vehicle-info services related to the local cache for decoded vehicles.
We had a file that was previously hosted in Google Cloud, and it was moved directly into the repo so the files could be deployed with the app.
While I’d prefer to have the file hosted, the bandwidth-transfer costs from Google Cloud skyrocketed, and we had no choice but to make that change to temporarily stop the bleeding of costs there. I will be evaluating other storage options from which we can pull down a 350-400MB file on the start of every instance of inventory and vehicle-info, but we have about 30 instances running across both of those services… and that is in every space in every region. So, a 350MB file gets downloaded 90 times whenever we deploy those two services with any PR merge, etc… and that’s just D1.
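(For scale, from the numbers above: 90 downloads × ~350MB ≈ 31.5GB of transfer for one full deploy of the two services, repeated on every PR merge.)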

So - I moved the files into the repo as split, compressed files that deploy with the app.
It works great and runs fast… but we now carry that storage in the app.
I did not increase the disk_quota for these services when that change was made.
We currently have the default disk_quota for file storage on every container, which is 1GB.
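To make the split-file setup concrete, here is a minimal Node/TypeScript sketch of how the parts could be reassembled into a single readable stream at startup. The directory, file names, and part-suffix convention are illustrative assumptions, not the actual repo layout:

import { createReadStream } from "fs";
import { readdir } from "fs/promises";
import { PassThrough } from "stream";
import * as path from "path";

// Hypothetical layout: the compressed cache lives in the repo as ordered
// part files, e.g. data/decoded-vehicles.gz.part-00 ... part-09.
export async function concatParts(dir: string, prefix: string): Promise<NodeJS.ReadableStream> {
  const parts = (await readdir(dir))
    .filter((name) => name.startsWith(prefix))
    .sort(); // lexicographic sort works for zero-padded part suffixes

  const out = new PassThrough();
  (async () => {
    // Pipe each part into the same PassThrough, ending it only after the last part.
    for (const part of parts) {
      const src = createReadStream(path.join(dir, part));
      src.pipe(out, { end: false });
      await new Promise<void>((resolve, reject) => {
        src.on("end", resolve).on("error", reject);
      });
    }
    out.end();
  })().catch((err) => out.destroy(err as Error));

  return out;
}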

Running into heap allocation issues
This may be because the disk_quota was not increased after the gzipped decoded-vehicle files were moved into the service.
They are about 350MB compressed, but the process of uncompressing the files through piped streams may be placing too much burden on the existing 1GB of disk_quota (the uncompressed size of the files is ~2.5GB).

The disk storage seems to be okay with just the deployed files (570MB of the 1GB).
However, that 350MB is only the compressed size of the aggregated split files.
When expanded, it is about 2.5GB.

I am not actually expanding the files and saving them to disk.
The work happens on a piped stream through which the uncompressed data flows.
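A sketch of that piped-stream cache build, assuming these are Node services (the heap and stream vocabulary suggests V8) and assuming one JSON record per line keyed by VIN; both details are illustrative:

import { createGunzip } from "zlib";
import { createInterface } from "readline";

// Build the in-memory cache straight off the compressed stream. The
// ~2.5GB of uncompressed data only ever exists as transient stream
// chunks; nothing uncompressed is written to disk.
export async function buildCache(
  compressed: NodeJS.ReadableStream
): Promise<Map<string, unknown>> {
  const cache = new Map<string, unknown>();
  const lines = createInterface({ input: compressed.pipe(createGunzip()) });
  for await (const line of lines) {
    const record = JSON.parse(line) as { vin: string };
    cache.set(record.vin, record);
  }
  return cache;
}

The caveat with this pattern: backpressure keeps the stream's transient buffers small, but everything the cache retains from the stream lives on the V8 heap. If the Map ends up holding most of the 2.5GB of decoded data, the process will hit V8's default heap ceiling (roughly 1.5-4GB depending on Node version) regardless of the disk_quota setting.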

Regardless, I’m seeing heap allocation issues in vehicle-info. They also seem to happen during the startup script, which is when the cache is built from the compressed files.

I am going to increase the disk_quota to see if that gives any breathing room for virtual memory and solves the heap allocation issue I’m seeing, but this is really just my first guess.
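One cheap way to test that guess, again assuming Node: sample memory while the startup cache build runs. If heapUsed climbs toward the V8 limit while disk usage stays flat, the lever is the node flag --max-old-space-size (or retaining less in the cache), not disk_quota:

// Minimal diagnostic: sample heap and RSS every 5 seconds during startup.
const mb = (n: number): number => Math.round(n / 1024 / 1024);

const timer = setInterval(() => {
  const m = process.memoryUsage();
  console.log(`heapUsed=${mb(m.heapUsed)}MB heapTotal=${mb(m.heapTotal)}MB rss=${mb(m.rss)}MB`);
}, 5_000);

// Stop sampling once the cache build completes, e.g. after buildCache() resolves:
// clearInterval(timer);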

Any thoughts?
@piotr.s.brainhub haven’t we had these changes running for a week already without issue? That’s what’s confusing if this is actually related to the issue I’m describing above. (edited)


Candidate Solutions



Step-by-step guide for Example



Recommended Next Steps


