Tips and Tricks with MS SQL (Part 3)

Dec 6, 2019 by Sam Taylor

Change Database Auto-Growth from Percent-Based to Fixed-Size Growth

In an ideal world, every Microsoft SQL Server I come across would have its databases pre-grown to account for future growth, with storage needs re-evaluated periodically. Unfortunately, this is almost never the case. Instead, these databases rely on SQL Server’s autogrowth feature to expand their data files and log files as needed. The problem is that the default is to autogrow data files by 1MB and log files by 10%.

Because this was such a big performance issue, Microsoft changed the defaults in SQL Server 2016 and later: data files and log files now default to 64MB growth increments. If your server is still using 1MB autogrowth for data files and 10% for log files, consider adopting Microsoft’s newer defaults and bumping both up to at least 64MB.

Growing a data file in 1MB increments means the server must do a lot of extra work. If a file needs to grow by 100MB, the server must perform 100 separate operations: grow by 1MB, add data, grow again, and repeat. Imagine how bad this gets for databases growing by gigabytes a day! Percent-based growth is even worse, because the server has to do some computing before it can grow, and the increment keeps getting bigger. Growing 10% of a 100MB log file is a modest 10MB, but by the time that file reaches 100GB each growth is a 10GB chunk, which can quickly get out of hand, bloat your storage system, and add CPU overhead as an extra kick in the rear!

Luckily, the change is very simple. In SQL Server Management Studio, right-click one of your user databases and select “Properties”, then go to the “Files” page. Click the “…” button next to the Autogrowth value for the “ROWS” file and change it to 64MB or greater (depending on how much room you have to work with and the growth you expect). Do the same for the “LOG” file. That’s it! You’re done, and you’ve given your server some well-needed breathing room!
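
If you prefer to script this change rather than click through SSMS, the same setting can be applied with T-SQL. This is a minimal sketch: the database name SalesDB and the logical file names are placeholders, so check sys.database_files for your actual logical names before running it.

  -- List the database's files and their current growth settings
  -- (SalesDB and the logical file names below are placeholders)
  SELECT name, type_desc, growth, is_percent_growth
  FROM SalesDB.sys.database_files;

  -- Switch the data file and log file to fixed 64MB growth increments
  ALTER DATABASE SalesDB MODIFY FILE (NAME = N'SalesDB', FILEGROWTH = 64MB);
  ALTER DATABASE SalesDB MODIFY FILE (NAME = N'SalesDB_log', FILEGROWTH = 64MB);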

Any questions, comments, or feedback are appreciated! Feel free to reach out to aturika@newtheme.jlizardo.com for any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 2)

Dec 6, 2019 by Sam Taylor

Database Compatibility Levels Left Behind Post-Upgrades & Migrations

What do I find in common with almost every Microsoft SQL Server I come across that has recently been upgraded or migrated? The user databases’ compatibility levels are still stuck in the past on older SQL versions. The compatibility level remains at the version of SQL Server the database was created on, which could be several versions back, or a mixed bag of databases all on different versions. When SQL Server is upgraded or databases are migrated to a newer version, the compatibility levels don’t update automatically; it must be done manually. It’s important to update those databases to the most recent level to take advantage of all the newer version’s features. The good news is that it’s very simple to change and only takes a minute.

Raising the compatibility level doesn’t really carry any risk unless there are linked servers involved that run on much older versions of SQL Server. Even then, it’s usually a relatively safe change. If you’re unsure, check with your DBA or reach out to me with questions. All you need to do is right-click the database in SQL Server Management Studio, select “Properties”, choose “Options”, and update the “Compatibility Level” drop-down to your current version of SQL Server. It’s important that you don’t forget to update this setting after migrating or upgrading to a newer version of MS SQL Server.
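
The same change can be scripted with T-SQL. This is a minimal sketch; the database name SalesDB is a placeholder, and level 150 assumes a SQL Server 2019 instance, so use the level that matches your own version.

  -- Check the current compatibility level (SalesDB is a placeholder name)
  SELECT name, compatibility_level
  FROM sys.databases
  WHERE name = N'SalesDB';

  -- Raise it to match the instance version (150 = SQL Server 2019)
  ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 150;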

Any questions, comments, or feedback are appreciated! Feel free to reach out to aturika@newtheme.jlizardo.com for any SQL Server questions you might have! 

Tips and Tricks with MS SQL (Part 1)

Dec 6, 2019 by Sam Taylor

Change your Power Plan

By default, Windows chooses “Balanced” as the recommended Power Plan on a new Windows Server deployment. It’s a setting you should change, and in my experience one of the most often overlooked. Production SQL Servers usually aren’t being powered by laptops running on battery, so we want an option that gives SQL Server more breathing room. The goal is to make sure the server is always at the ready, not sacrificing processes or services for the sake of a fairly minimal reduction in power consumption.

Instead of “Balanced”, choose the “High Performance” plan. Your SQL Server will thank you. This is easily done by going to the Control Panel, clicking “Power Options”, and picking the plan better suited to running SQL Server. Those who are savvy can roll the change out to all of their SQL Servers at once using Group Policy.
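
If you’d rather not click through the Control Panel on every server, the built-in powercfg utility can make the same change from an elevated command prompt (and can be wrapped in a startup script pushed by Group Policy). This is a sketch that assumes the stock Windows power schemes are still present; SCHEME_MIN is the built-in alias for the High Performance plan.

  rem List the available power schemes and show which one is active
  powercfg /list

  rem Activate the built-in High Performance plan (alias SCHEME_MIN)
  powercfg /setactive SCHEME_MIN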

Any questions, comments, or feedback are appreciated! Feel free to reach out to aturika@newtheme.jlizardo.com for any SQL Server questions you might have!   

Plants with Sam: Spider Plants

Jul 16, 2019 by Sam Taylor

Hi, it’s Sam with the next segment of Plants with Sam! If you’re a little late to the plant party, my first post with more details about this blog series and why I’m doing it can be found here. 

 

Today I would like to talk about the spider plant. These plants are pretty popular for two reasons: they’re super easy to take care of, and given the right conditions, they produce babies like crazy!

 

I got my own spider plant from a friend, and it was a baby from one of her main plants. The way this works is if the conditions are right (lots of light and warm temperatures, as well as a snug pot), they will send out a shoot from the middle of the plant and at the end of that shoot, a miniature spider plant will grow. It’s best to wait until the tiny spider plant is starting to grow its own roots, then you can just pinch it off and stick it in the soil!

Here are some tips and tricks that will help you care for your spider plant:

Light

Spider plants prefer nice, bright light, but they will also be alright in lower light conditions.

Water

They don’t need water too often, just about every other week or so. Let the soil dry out completely in between waterings. If your water has a lot of salts or minerals in it, it would be best to use distilled water or rainwater.

Soil

Most soils that drain quickly work fine for these plants. It’s best to use soils that don’t have a lot of fertilizer in them.

Temperature

Spider plants like it a bit on the warmer side, so it’s best to keep the temperature between 70 and 90 degrees Fahrenheit. They will survive in temperatures as low as 35 degrees, but they will not grow much if the temperature is under 65.

Fertilizer

Less is more when it comes to fertilizer for spider plants. Use a diluted houseplant fertilizer in spring and summer.

If you follow these tips, your spider plant will have no issues being happy and healthy! Don’t forget to stay tuned for more plant care tips!

My Experience at CrossRealms

Jul 16, 2019 by Sam Taylor

Hi, my name is Dayoung KO, I’m from Korea and I am working as a marketing intern at CrossRealms. I am studying abroad here at IIT, and this internship is a part of my schooling. Since the internship class that I’m in is only scheduled for 6 weeks, this will be my last week here at CrossRealms.

 

Thinking back over the past few weeks at CrossRealms, I can describe the experience in a few words: “I’ve learned a lot.” I remember the very first day I met Usama, the president of CrossRealms. He told me that he wanted me to learn as much as I could here, and to share my culture with my co-workers, who come from different backgrounds. As a result, I learned not only about the marketing field I’ve been working in, but also about building personal relationships with coworkers from different cultural backgrounds.

 

One thing I enjoyed about CrossRealms is that every employee is free to speak their mind. Regardless of what kind of work I was doing, I participated in almost every marketing meeting. My co-workers always asked my opinion and had me speak up during the meetings. I felt totally free to give my opinion and to ask questions when I did not understand certain things. I could feel myself building real relationships with them and felt that they truly respected me. My co-workers were interested in Korean things, so we went out as a company to a Korean restaurant so that I could talk about Korean culture and food. I know that not all companies in the US have the same culture, but the culture of CrossRealms is totally different from Korea’s, in a positive way.

 

I really appreciate having been a part of CrossRealms though it was only for 6 weeks. Thanks to Usama, Sam, Unme, Johanna, Candice, Matt, Constantine, and Jasper! I hope to see you guys again.

 

Hello, I’m Dayoung KO from Korea, working as a marketing intern at CrossRealms. I’m studying at IIT as a visiting student in a program that includes an internship. Since the internship is scheduled for six weeks, this will be my last week at CrossRealms.

 

Looking back on the past few weeks at CrossRealms, I think I can sum up the experience in one sentence: “I’ve learned a lot.” I still remember the first day I met Usama, the founder, and what he told me: the only thing he expected of me was to learn as much as I could here and to share my culture with colleagues from different cultural backgrounds. As a result, I was able to learn not only about the marketing field I worked in, but also about cultural differences and the process of building personal relationships.

 

One of the things I liked about this company is that everyone speaks their mind freely in response to each other’s ideas. Regardless of what work I was doing, I took part in almost every marketing meeting, and my colleagues and managers always asked for my views and helped me voice my opinions during the meetings. As a result, I could speak freely and ask questions without hesitation whenever I was curious about something or didn’t understand it. I felt that I was building personal relationships with them and that they respected me. All of my colleagues were also very interested in Korea, so we went to a Korean restaurant together and shared the culture and food. It’s true that not every company in the US has a culture like this, but CrossRealms’ company culture was, in an entirely positive sense, very different from Korea’s.

 

Although it was only six weeks, I’m grateful to have been a part of CrossRealms during that time. My internship has ended, but I plan to keep moving forward with CrossRealms. Thank you to everyone: Usama, Sam, Unme, Johanna, Candice, Matt, Constantine, and Jasper! I look forward to seeing you all again.

Written by Dayoung KO


Plants with Sam: Snake Plants

Jul 16, 2019 by Sam Taylor

Hi, Sam here with the latest installment of Plants with Sam! As a reminder, I’m starting a new blog series on the care of plants to complement CrossRealms’ Let’s Grow initiative. My first post with more details can be found here.

 

Today I would like to talk about the snake plant. The snake plant is a part of the Sansevieria family and another common nickname for it is mother-in-law’s tongue.

 

Sansevierias are perfect plants for those who tend to be forgetful, and don’t always water their plants. I just recently got one of my own and love the way it looks. We have one in the office as well. They can grow to be pretty large so they make great floor plants when they are older.

 

Snake plants are very tolerant and can survive most conditions, including low levels of light, as well as drought and just being ignored in general.

Although it is easy to care for, here are a few tips and tricks to keep your snake plant happy:

Light

While sansevierias can handle anything from low light to full sun, it is best to give them indirect light.

Water

Because snake plants are considered succulents, they can be very susceptible to rot. It’s best not to water too often, and to give barely any water at all during the winter. Try to let the soil dry out completely between waterings.

Soil

Most soils that drain quickly would work fine for these plants, but since they originate from the desert, sandier soils will work best.

Temperature

Temperatures between 55 and 85 degrees Fahrenheit are best. Anything below 50 will damage the plant.

Fertilizer

Feed with a mild cactus fertilizer once during the growing season or a balanced liquid slow-release (10-10-10 fertilizer) diluted to half-strength. Don't fertilize in the winter.

If you follow these tips, your snake plant will have no issues being happy and healthy! Don’t forget to stay tuned for more plant care tips!

COHESITY MSP SOLUTION VS RIVALS

Jul 11, 2019 by Sam Taylor

Cohesity vs. Rival Solution:

Comparison from a Business Continuity Perspective

The Business Continuity field is saturated with different solutions, all promising to do the same thing: keep your business running smoothly and safely post-disaster. But how do you weed through the options to determine which solution is best, and what criteria should you use to do so?

The idea for this blog post came about during a recent visit to a newly acquired client, who was using one of the many Business Continuity solutions on the market. After we asked about the service, our client realized that they had bought it based on affordability, but had never actually analyzed the service – or whether it was good enough for their business. Below, we’ll explore the differences between that solution and Cohesity’s MSP solution (which we currently use at CrossRealms) from technical, process, and financial perspectives. We hope this information helps you think more critically about what’s involved in achieving optimal Business Continuity/Disaster Recovery.

Technical & Process

Let’s start with the functional differences between the rival and the Cohesity MSP solution. The following chart breaks it down:

Financial

The Cohesity pricing is around $250/TB per month, depending on the size of the backup and requirements, with a one-year minimum commitment. This includes unlimited machine licensing, cloud backup, and SSD local storage for extremely fast recovery. It also includes Tabletop exercises and other business functions necessary for a complete Business Continuity solution.

The rival solution’s pricing (depending on the reseller) is around $240/TB per month, including local storage with limited SSD. This also includes unlimited machine licensing and file recovery. It does not include Tabletop exercises, local SSD, or remote connectivity for users to the data center in case of a catastrophic office failure.

Conclusion

Overall, Cohesity outshines competitors with regard to the initial backup/seeding and Test/Dev processes. While it is slightly more expensive, the extra cost is absolutely worth the added benefits.

We hope this post will start a conversation around what should be included or excluded from a Business Continuity plan, and what variables need to be considered when comparing different products. Please comment with any questions or insight – we’d love to hear your thoughts.

BUSINESS CONTINUITY IN THE FIELD: A SERIES OF CASE STUDIES BY CROSSREALMS

Jul 11, 2019 by Sam Taylor

Case Study #1: Rural Hospitals and New Technologies: Leading the Way in Business Continuity

The purpose of this series is to shed light onto the evolving nature of Business Continuity, across all industries. If you have an outdated plan, the likelihood of success in a real scenario is most certainly diminished. Many of our clients already have a plan in place, but as we start testing, we have to make changes or redesign the solution altogether. Sometimes the Business Continuity plan is perfect, but does not include changes that were made recently – such as new applications, new business lines/offices, etc.

In each scenario, the customer’s name will not be shared. However, their business and technical challenges as they relate to Business Continuity will be discussed in detail.

Introduction

This case study concerns a rural hospital in the Midwest United States. Rural hospitals face many challenges, chief among them that they serve poorer communities, with fewer reimbursements and lower occupancy rates than their metropolitan competition. Despite this, the hospital was able to surmount these difficulties and achieve an infrastructure that is just as modern and leading-edge as most major hospital systems.

Background

Our client needed to test their existing Disaster Recovery plan and develop a more comprehensive Business Continuity plan to ensure compliance and seamless healthcare delivery in case of an emergency. This particular client has one main hospital and a network of nine clinics and doctor’s offices.

The primary items of concern were:

  • Connectivity: How are the hospital and clinics interconnected, and what risks can lead to a short or long-term disruption?
  • Medical Services: Which of their current systems are crucial for continued operation, whether those systems are part of the current disaster recovery plan, and whether they have been tested.
  • Telecommunication Services: Phone system and patient scheduling.
  • Compliance: If the Disaster Recovery system becomes active, especially for an extended period, the cyber security risk increases as more healthcare practitioners use the backup system and, in doing so, expose it to threats in the wild that exist today but have never impacted the live system.

After a few days of audit, discussions, and discovery, the following were the results:

Connectivity: The entire hospital and all clinics were on a single Fiber Network which was the only one available in the area. Although there were other providers for Internet access, local fiber was only available from one provider.

Disaster Recovery Site: Their current Business Continuity solution had one of the clinics as a disaster recovery site. This would be disastrous in the event of a fiber network failure, as all locations would go down simultaneously.

Partner Tunnels: Many of their clinical functions required access to their partner networks, which is done through VPN tunnels. This was not provisioned in their current solution.

Medical Services: The primary EMR system was of great concern because their provider would say: “Yes, we are replicating the data and it’s 100% safe, but we cannot test it with you – because, if we do, we have to take the primary system down for a while.” Usually when we hear this, we start thinking “shitshow”. So we dragged management into it and forced the vendor to run a test. The outcome was a failure. Yes, the data was replicated, and the system could be restored, but it could not be accessed by anyone. The primary reason was that their system replicates and publishes successfully only if the redundant system is on the same network as the primary (an insane – and, sadly, common – scenario). A solution to this problem would be to create an “Extended LAN” between the primary site and the backup site.

Telecommunication: The telecommunication system was not a known brand to us, and the manufacturer informed us that the redundancy built into the system only works if both the primary and secondary were connected to the same switch infrastructure.

Solution Proposed

CrossRealms proposed a hot site solution in which three copies of the data and virtual machines will exist: one on their production systems, one on their local network in the form of a Cohesity Virtual Appliance, and one at our Chicago/Vegas Data Centers. This solution allows for instantaneous recovery using the second copy if their local storage or virtual machines are affected. Cohesity’s Virtual Appliance software can publish the environment instantaneously, without having to restore the data to the production system.

The third copy will be used in the case of a major fiber outage or power failure, in which case their systems will become operational at either of our data centers. The firewall policies and VPN tunnels are preconfigured – including a read-only copy of their Active Directory environment, which provides up-to-the-minute replication of their authentication and authorization services.

The following are items still in progress:

  • LAN Extension for their EMR: We have created a LAN Extension to one of their clinics which will help in case of a hardware or power/cooling failure at their primary facility. However, the vendor has very specific hardware requirements, which will force the hospital to either purchase and collocate more hardware at our data center, or migrate their secondary equipment instead.
  • Telecom Service: They currently have ISDN backup for the system, which will work even in the case of a fiber outage; once ISDN technology is phased out over the next three years, an alternative will need to be configured and tested. For now, there is no redundancy in case of a primary site failure, a risk that may have to be pushed to next year’s budget.

Lessons Learned

The following are our most important lessons learned through working with this client:

  • Bringing management on board to push and prod vendors to work with the Business Continuity Team is important. We spent months attempting to coordinate testing the EMR system with the vendor, and only when management got involved did that happen.
  • Testing the different scenarios based on the tabletop exercises exposed issues that we didn’t anticipate, such as the fact that their primary storage was Solid State. This meant the backup solution had to incorporate the same level of IOPS, whether local to them or at our data centers.
  • Run books and continuous practice runs were vital, as they are the only guarantee of an orderly, professional, and expedient restoration in a real disaster.

100,000+ MALICIOUS SITES REMOVED WITHIN LAST TEN MONTHS

Jul 11, 2019 by Sam Taylor

Amidst a news cycle rife with malware incidents and cyberattacks, there is one shining spot of hope: 100,000 malware sites have been reported and taken down within the last year.  

Abuse.ch, a non-profit cybersecurity organization, has spearheaded a malicious URL hunt known as the URLhaus initiative. Launched in March 2018, the project has a small group of 265+ security professionals searching for sites that host active malware campaigns. The reported sites are passed along to information security (infosec) communities, who work to blacklist the URLs or take them down completely.

While abuse reports keep rolling in, action on the web hosting providers’ part has been slow. Once a provider is notified that it hosts a malicious site, it needs to take action to remove or disable the site. The average time to remove a malware-infected site has been reported to be 8 days, 10 hours, and 24 minutes – a generous delay that allows the malware to infect even more end users.

Heodo is one of the most common malware strains involved: a multi-faceted strain that can be used as a downloader for a variety of other attacks, acting as a spam bot, a banking trojan, or a credential swiper.

While hosting providers aren’t responding with any particular deftness, it is still quite a feat to gather all of these malicious URLs with such a limited group of researchers.

FROM THE TRENCHES: 3CX SECURITY

Jul 11, 2019 by Sam Taylor

This past month, one of our clients experienced a security compromise of their phone system in which three extensions had their credentials swiped. Among the information taken was the remote phone login information – username, extension, and password – for their 3CX phone system.

Our first tip-off was the massive volume of international calls being made. We quickly realized that this was not your traditional voicemail attack or SIPVicious-style scanner attack, because its signature was different (more below). To alleviate the situation we immediately changed the affected login credentials, but to our surprise the attack resumed on the same extensions within minutes of the change.

For those of you thinking the issue could be a simple or easily guessed username and password (an extension number and a simple 7-digit password), that wasn’t the case here. It’s important to note that with 3CX version 15.5 and higher, the login credentials are randomized and do not include the extension ID, which makes them a lot harder to guess or brute-force.

We locked down international dialing while we investigated, and our next target was the server’s operating system. We spent hours sifting through the logs for any signs of attack, but absolutely none were present. We next checked the firewall and again saw no signs of attack – so how was this happening? How were the attackers able to figure out the user ID and password so quickly, and without triggering the built-in protections that 3CX has, like blacklisting IP addresses and limiting password-guessing attempts?

Right back to square one, we needed more information. After reaching out to several of the client’s contacts, we found out that the three extensions were in use at an international venue which, interestingly enough, was the destination of all the international calls! Phew, finally a decent clue. Working on the assumption of a rogue wireless access point at the hotel, we asked the users to connect over VPN before using their extensions, which stopped any new credentials from being guessed.

While we were able to get our client up and running again, there was something more interesting going on here. The attackers were using a program to establish connections and then use those connections to let people dial international destinations on the cheap (the margins here are extraordinary). That program identifies itself with a “user_agent” string when it establishes a connection to place calls. If we filter on that string, the attackers have to redo their programming before they can launch the attack again – which proved to be a quick and instantaneous end to this attack regardless of its source, even if they acquire the necessary credentials.

Here’s how I would deal with this next time. In 3CX, you can follow these steps:

  1. Go to Settings
  2. Open Parameters
  3. Filter for “user_agent”
  4. Add the user agent used in the attack (the signature) to either field and restart services

E.g., signatures we saw: Ozeki, Gbomba, Mizuphone.