Scriptcase Backup Fails on macOS - Page Reloaded Due to Excessive Memory Use

Hi - Looking for some help on this…

Whenever I run the Scriptcase backup (Options > Settings > Backup), it runs for a while and then a Safari message appears at the top of the page saying, effectively, “this page is using a lot of memory. Closing it will improve performance…”. After a short while, Safari (or macOS) forces a page reload and the backup dies.

I can recreate this problem at will - about 8 attempts now. I have tried reboots and more. Nothing works - same problem every time.

Looking at the macOS Activity Monitor, it appears the process “127.0.0.1:8090” (Scriptcase) consumes huge amounts of memory during the backup - on the order of 3-4 GB!

If it makes any difference, there are around 450 apps across about 8 projects.

I really don’t like the idea of proceeding much further without backups. I am not sure how important the Scriptcase backup is in the scheme of things - my Mac is backed up to Time Machine.

Does anyone know what the root cause of the memory consumption might be, and is there a workaround? Is this likely a bug/memory leak?

Kind regards…

Change the settings in your php.ini - look for the lines mentioning MEMORY and FILES.
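
For example, these are the directives that kind of advice usually points at - the values below are only a starting point, not something tuned to your project size:

memory_limit = 1024M         ; per-script memory ceiling - the key one for this symptom
max_execution_time = 3600    ; seconds before PHP kills a long-running script
post_max_size = 512M
upload_max_filesize = 512M
max_file_uploads = 50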

I have been through the php.ini file and made some changes - I increased the limits on anything that looked even vaguely relevant. Most values went up by 50%, if not doubled.

Cold boot. Went straight into Scriptcase and started a backup. It failed exactly as before - no different. I monitored the memory use of the Scriptcase process (127.0.0.1:8090) in macOS Activity Monitor as the backup progressed, and its memory use went from ~500 MB to well over 4 GB before the process effectively failed.
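
For anyone wanting to watch this from Terminal rather than Activity Monitor, something like this should do it (port 8090 comes from my install - adjust to yours, and replace <PID> with the number the first command reports):

lsof -i :8090 -sTCP:LISTEN              # find the PID of the process listening on port 8090
top -pid <PID> -stats pid,command,mem   # watch that process's memory use live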

I cannot imagine why a backup process would need to consume so much memory. I remain stuck. I also don’t know how important it is to run this process given that the Mac as a whole is backed up with Apple Time Machine - is the SC backup redundant? If it is, what is it there for? If it is not redundant, I remain stuck, without appropriate protection.

Help! Any other ideas?

Hi, I’m using MAMP Pro under macOS and have not noticed such behaviour or errors. Perhaps you could try the free MAMP version or XAMPP for Mac?

Could you describe your installation? OS version, MySQL connection (I mean connection via mysql.sock or via localhost, like a local network on your Mac), etc.

For info, here is my basic php.ini:

;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;
max_execution_time = 3600   ; Maximum execution time of each script, in seconds
max_input_time = 3600       ; Maximum amount of time each script may spend parsing request data
memory_limit = 512M         ; Maximum amount of memory a script may consume
post_max_size = 512M
upload_max_filesize = 512M
max_file_uploads = 50
max_input_vars = 10000
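
If you apply these, it is worth confirming that the PHP serving Scriptcase actually picked the new values up - the CLI and the web server often load different php.ini files:

php --ini                                        # shows which php.ini the CLI loads
php -r 'echo ini_get("memory_limit"), PHP_EOL;'  # prints the effective limit

For the web-server side, a page that calls phpinfo() shows the loaded configuration file and the effective values.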

Thank you for your suggestion. I saved my original php.ini, created a new one with your content only, and restarted Apache on macOS.
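
(For anyone following along: on a stock macOS Apache that restart is typically

sudo apachectl restart

MAMP/XAMPP users would restart from their control panel instead.)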

No change. Exact same problem.

I am wondering if there is a way to clean up the SC database. I think it is SQLite, so I could try the VACUUM command - has anyone done this? Does anyone know where the DB is located? Is there any other “official” way of checking and/or cleaning up the SC database?
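
To sketch what I have in mind - assuming the Scriptcase metadata really is a single SQLite file, and noting that the path below is a placeholder, since I have not confirmed where (or even whether) SC keeps one:

cp /path/to/scriptcase.db /path/to/scriptcase.db.bak      # copy the file first, just in case
sqlite3 /path/to/scriptcase.db "PRAGMA integrity_check;"  # check for corruption
sqlite3 /path/to/scriptcase.db "VACUUM;"                  # rebuild the file and reclaim free space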

FWIW, I just tried to export my project and exactly the same thing happened… “this page is using a lot of memory. Closing it will improve performance…”, then Safari forces a page reload and the export dies.

An update from me on this. Anybody having this problem will likely find they have a lot of version history for their project(s). If you go back and clean up (delete) old project versions to reduce the version history you maintain, you may find the backup works. That is what I did: I now keep only the version I am currently working on plus the two previous versions, and the backup now works fine.

It seems that the backup process consumes substantial memory as it works through your database, and it either does not free any memory as it goes or has a massive memory leak. SC/NM either need to fix the leak or redesign the process so that it releases memory as it goes. The current implementation completely fails to serve the needs of enterprise customers with large projects and/or long version histories.

With a workaround identified, I probably will not return to this, but it remains an open item IMO until the process is fixed.

Regards.