
Tutorial: Transforming a QLC SSD into an SLC SSD – Dramatically increasing the drive’s endurance!

In today’s article, we’re embarking on something unprecedented! We’ll guide you step by step through the process of transforming an SSD equipped with QLC NANDs into an SLC SSD, significantly enhancing its durability and overall performance!

Specification of the DUT SSD:

SSD Lineup US

The SSD I chose is a Crucial BX500, which we’ve tested numerous times both on our website and on my YouTube channel.

ATTENTION: BEFORE YOU CONTINUE READING!!!

Firstly, this procedure is safer than overclocking, but it still requires caution. Only proceed if you are genuinely interested, as I cannot be held responsible if any steps are executed incorrectly. I will explain as clearly as possible to minimize any misunderstandings.

This voids the warranty of any SSD. AND REMEMBER, WHEN FLASHING THE FIRMWARE TO THE SSD, ALL DATA WILL BE ERASED, so be sure to back up your devices before proceeding with anything.

NECESSARY TOOLS

To perform this procedure, we used a SATA to USB 3.0 adapter based on the JMicron JMS578 bridge chip.

In addition, we also need tweezers (or a similar metal clip) to short the ROM/Safe Mode pads on the SSD’s PCB.

Technical Specs

Before we move on to the tutorial, let’s analyze this SSD a little further.

Controller

The SSD controller is responsible for handling all data management tasks, including over-provisioning and garbage collection, among other background functions. Naturally, this contributes to the SSD’s overall performance.

ssd crucial bx500 500gb controlador

In this project, the SSD utilizes the Silicon Motion controller model SM2259XT2, which is a new variant of the SM2259XT.

In this case, it’s a single-core controller, meaning it has one main core responsible for managing the NANDs, built on a 32-bit ARC architecture rather than the ARM cores we’re accustomed to. This controller has an operating frequency of up to 550 MHz, but as we’ll see in the following image, in this project it was operating at 437.5 MHz.

This controller also supports up to 2 communication channels with a bus speed of up to 800 MT/s, and each channel supports up to 8 Chip Enable (CE) signals, allowing the controller to communicate with up to 16 dies simultaneously using interleaving.
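
As a quick sanity check on those numbers, here is the die-count arithmetic in a minimal Python sketch (the raw-bandwidth figure is just a theoretical ceiling I added, assuming the usual 8-bit NAND bus; the SSD never actually runs the bus at 800 MT/s, as we’ll see below):

```python
# Rough arithmetic for the SM2259XT2 NAND interface described above.
channels = 2          # communication channels
ce_per_channel = 8    # Chip Enable (CE) signals per channel
max_bus_mts = 800     # maximum transfer rate per channel, in MT/s

max_dies = channels * ce_per_channel
raw_bandwidth_mb_s = channels * max_bus_mts  # 8-bit bus: 1 byte per transfer

print(f"Max dies addressable via interleaving: {max_dies}")          # 16
print(f"Theoretical raw NAND bandwidth: {raw_bandwidth_mb_s} MB/s")  # 1600 MB/s
```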

Screenshot 2024 03 02 111225

This differs from its predecessor, the SM2259XT, which had 4 channels with 4 CE each, likewise supporting a maximum of 16 dies.

DRAM Cache or H.M.B.

Every top-of-the-line SSD aiming to deliver consistent high performance requires a buffer to store its mapping tables (Flash Translation Layer or Look-up table). This enables better random performance and responsiveness.

Being a DRAM-less SATA SSD, it has no dedicated DRAM cache, and since Host Memory Buffer (HMB) is an NVMe-only feature, it cannot fall back on that either.

NAND Flash

Regarding its storage integrated circuits, the 500GB SSD has 2 NAND flash chips labeled “NY240,” which when decoded yield the NANDs “MT29F2T08GELCEJ4-QU:C” from the American manufacturer Micron, model N48R Media Grade. In this case, they are 1Tb (128GiB) dies containing 176 layers of data and a total of 195 gates, resulting in an array efficiency of 90.2%.

ssd crucial bx500 500gb nand flash

In this SSD, each NAND Flash package contains 2 dies of 1Tb density each, totaling 256GB per package and 512GB of raw NAND (500GB of user capacity). They communicate with the controller using a bus speed of 262.5 MHz (525 MT/s), which is considerably below what the NANDs are capable of. These N48R dies can operate at 800 MHz (1600 MT/s).
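
A short worked calculation of the figures above (die count, array efficiency, and the DDR bus conversion), just to make the arithmetic explicit:

```python
# Worked numbers for the Micron N48R configuration described above.
gibit_per_die = 1024              # a 1 Tb die is 1024 Gibit
dies_per_package = 2
packages = 2

die_gib = gibit_per_die / 8       # 128 GiB per die
raw_gib = die_gib * dies_per_package * packages
print(f"Raw NAND: {raw_gib:.0f} GiB across {dies_per_package * packages} dies")  # 512 GiB, 4 dies

# Array efficiency: active data layers vs. total gates in the stack
active_layers, total_gates = 176, 195
print(f"Array efficiency: {active_layers / total_gates:.2%}")      # ~90.26%

# The NAND bus is double data rate, so MT/s = 2 x clock in MHz
bus_mhz = 262.5
print(f"NAND bus: {bus_mhz} MHz -> {2 * bus_mhz:.0f} MT/s")        # 525 MT/s
```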

There are several possible reasons for running the bus this low: the manufacturer may have chosen to reduce power consumption and heat, or this batch of NAND may not have passed Micron’s quality control at higher frequencies and is therefore sold at a lower grade (and possibly with lower endurance). Either way, cheaper NAND is what allows SSDs like this to reach such a low price.

SOFTWARE UTILIZED FOR THIS PROJECT

As this is a Silicon Motion controller, we will be using one of their mass production tools, known as MPTools. It’s worth noting that these tools are NOT provided by the manufacturers; they are LEAKED by individuals with access and posted on Russian or Chinese forums.

image 1

For this project, we will use the “SMI SM2259XT2 MPTool FIMN48 V0304A FWV0303B0“; the tool must be compatible with both the controller and the NAND Flash, and this version is.

image 2

Before making any modifications, we need to retrieve certain parameters from the SSD to preserve them. These values in the software are a preset from another SSD that may have different parameters. We need to obtain the following parameters:

  • Flash IO Driving, with its subdivisions
  • Flash Control Driving
  • Flash DQS/Data Driving

These parameters use hexadecimal values and must be changed according to the desired speed that we will configure for the SSD.

We also have many more parameters such as:

  • Control ODT (On-die Termination)
  • Flash ODT (On-die Termination)
  • Schmitt Window Trigger

To get these parameters, we need to go to the main screen of MPTools as shown below:

image 3

And then we’ll click on “Scan,” which will scan all compatible disks in the system:

image 4

After this, if everything has gone smoothly so far, the SSD will be shown on port 1. Note that it is not yet necessary to put the SSD into Safe Mode/ROM Mode.

image 5

Now we double-click the blue entry “Ready (FW: M6CR061, MN48R)“, which opens a new screen with the SSD’s information.

image 6

Then we should click on both Card mode and CID Settings to see all the parameters that the SSD comes with from the factory.

image 7

After noting these parameters, we also see here the speed of the controller and the NAND, which for the sake of a fair comparison, we will leave at these same frequencies.

Applying Configurations

Initially, we should click the “Edit Config” button in the top right corner; the default password is the space bar pressed twice, i.e. literally “  ”.

Screenshot 2024 03 06 142821

After enabling the options to configure the SSD, let’s start by giving a name to this project. In the “Model Name:” field, we’ll enter the name that the SSD will have. This one was named “SSD SLC Test.”

Next, we’ll add a tag to this new firmware. In the red rectangle number 3, we’ll go to the “Firmware Version:” field and enter whatever we desire. I used “SSD-SLC” as an example.

Next, we arrive at one of the most crucial parts, the section on signal integrity, as all these other parameters are sensitive and must be adjusted precisely.

Let’s start with the top 2 parameters, “Flash Control Driving (hex)” and “Flash DQS/Data Driving (Hex)“. As we saw in the previous images, these parameters come with values of 66 in hexadecimal, so we will keep them. These 2 parameters can be found in the images below:

image 8

After configuring these 2, let’s move on to the frequencies. As we can see in the image below, we take the 2 values we noted earlier and set them. In this software the CPU defaults to 500 MHz and the NAND to 250 MHz, so the NAND clock goes up slightly and the CPU clock comes down; I will not overclock here, to keep the comparison fair. Next, we set the Output Driving to 03H, the closest available value to the 04H the SSD originally had.
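
For reference, a small sketch of the clock changes described above, plus the MHz-to-MT/s conversion for the DDR NAND bus (values taken from the parameters we noted earlier):

```python
# Default clocks in this MPTools preset vs. the targets read from the stock drive.
defaults = {"controller_mhz": 500.0, "nand_mhz": 250.0}
targets  = {"controller_mhz": 437.5, "nand_mhz": 262.5}

for key in defaults:
    print(f"{key}: {defaults[key]} MHz -> {targets[key]} MHz")

# The NAND bus is double data rate, so the transfer rate is twice the clock.
print(f"NAND bus: {targets['nand_mhz']} MHz = {2 * targets['nand_mhz']:.0f} MT/s")  # 525 MT/s
```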

image 10

Next, we have the last 3 settings to resolve: Flash ODT, Control ODT, and Schmitt Window. In this case, we apply the values circled in red in each of these parameters in their respective fields.

image 16

Good, we have reached the end of another stage of this procedure. The next step is modifying the software itself, because by default this version of MPTools does not expose the option we need.

Initially, we need to go to the program’s directory and open the “UFD_MP” folder located in its root.

image 19

Inside this folder, we should look for the file named “Setting.set,” which is an MPTools configuration file. Let’s open it with Windows Notepad.

image 20

With the file open, we’ll make 2 modifications. The first is in the “[Function]“ section, where the setting “ENFWTAG=1“ should have its value changed from 1 to 0.

image 22

The other change is in the “[Option]“ section, where we add one extra line: “EnSLCMode=1“. After that, we save the file and reopen MPTools.
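
If you prefer not to edit the file by hand, the same two changes can be scripted. This is only a sketch: the install path below is an example (adjust it to wherever your MPTools copy lives), and you should back up Setting.set before touching it.

```python
# Sketch: apply the two Setting.set edits described above.
from pathlib import Path

cfg = Path(r"C:\SM2259XT2_MPTool\UFD_MP\Setting.set")   # example path, adjust to your install
text = cfg.read_text(encoding="latin-1")                # latin-1 round-trips any byte

# [Function] section: change ENFWTAG from 1 to 0.
text = text.replace("ENFWTAG=1", "ENFWTAG=0")

# [Option] section: add the extra line that exposes "Force SLC Mode".
if "EnSLCMode=" not in text:
    text = text.replace("[Option]", "[Option]\nEnSLCMode=1", 1)

cfg.write_text(text, encoding="latin-1")
print("Setting.set patched - reopen MPTools to see the new option.")
```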

image 23

With MPTools open, we can see that in the “Select Procedure” section, there is now an option called “Force SLC Mode“, which we should check. But let’s take it easy because we haven’t finished the modifications yet. There’s no point in trying to write this new firmware to the SSD if it’s still going to operate in its native mode, whether it’s TLC or QLC.

image 24

Now we’ve reached the crucial part that enables all these modifications we’ve made to become possible. We need to take the boot and firmware initialization files from a folder within MPTools and place these files in another directory of the program.

First, we return to the default directory of MPTools and open the “Firmware” folder within the software.

image 25

Inside this folder, we will find one named “2259,” which refers to the SM2259XT2 controller of this SSD. Within this folder, there should be another folder named “IMN48” along with a configuration file and parameters file.

image 27

Once again, we enter this IMN48 folder, where we will encounter numerous files and folders.

image 29

Let’s move forward and open the “00” folder, then select all the files and folders inside it.

image 31

We will “copy” (not cut) them into the previous folder, one level above “00”, which should then look like the following image:

image 33

And then we should enter the “XT2” folder and copy this single file inside it called “BootISP2259.bin” to this “00” directory as shown in the next image.

image 35

Next, we’ll copy all these files from the folder and paste them into the previous “2259” directory as shown in the following image:

image 39

IT IS IMPORTANT TO NOTE THAT THIS PROCEDURE WITH THESE FILES IS FOR THIS KIT OF SM2259XT2 + NANDS N48R.

OTHER SSDS WITH DIFFERENT NANDS FOLLOW THE SAME PROCEDURE, BUT WITH DIFFERENT FOLDER NAMES. THE N48 FOLDERS WILL BE NAMED ACCORDING TO THE NAND MANUFACTURER, AS SHOWN IN THE EXAMPLE BELOW OF AN SSD WITH SM2259XT2 CONTROLLER + KIOXIA BiCS5 NANDs.

P.S.: Some NAND models may not be 100% compatible. So far, I’ve only tested with Intel and Micron NANDs.

04 1

Having made that clear, we now return to MPTools, go to Parameter again, and check all the previous settings to confirm they are still applied.

image 37

If everything is correct, let’s go to the “Test” section next to “Parameter,” which is the program’s main screen. Now we should put the SSD into ROM mode. Let’s close the software again.

HOW MUCH DID THE ENDURANCE INCREASE?

To calculate durability precisely, we need the following information:

  • Write Amplification Factor (W.A.F.)
  • NAND Program/Erase cycles (P.E.C.)
  • The SSD’s capacity

With these 3 parameters, we can have a basic understanding of TBW (Terabytes Written), but remember that it’s an approximate value. For a more precise calculation, following the JEDEC JESD218A parameters would be necessary, which includes more complicated parameters like Wear-Leveling Efficiency (W.L.E.).

Using this basic calculation with the SSD in its default mode: the rated TBW is 120TB, and the Program/Erase cycle rating of these Media Grade N48R NANDs is around 900 P.E.C. How do I know this? I managed to access the NAND datasheet. Taking this into consideration, we arrive at the calculation below:

image 41

120 TB (TBW) = (900 P.E.C. × 0.5 TB) ÷ X (W.A.F.)

X = 3.75 (W.A.F.)

Based on this, the SSD’s WAF in its native form is quite high, around 3.75; when tested in practical scenarios it was close to 3.8.

Now, in pSLC mode, the parameters change. According to the datasheet, this die can withstand up to 60,000 P/E cycles, and the capacity drops to 0.12TB (120GB). When I tested the SSD with random writes, its WAF stayed below 2, a significant improvement.

X TB (TBW) = (60,000 P.E.C. × 0.12 TB) ÷ 1.8 (W.A.F.)

X = 4,000 TB (TBW)

We see that the TBW increases drastically, from 120TB (500GB QLC) to 4,000TB (120GB pSLC), roughly a 33-fold increase, or more than 3,000 percent.
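
The same endurance arithmetic in a small Python sketch, plugging in the figures quoted above (900 and 60,000 P/E cycles from the N48R datasheet, the measured WAF values, and each mode’s capacity):

```python
def tbw(pe_cycles: int, capacity_tb: float, waf: float) -> float:
    """Approximate endurance in terabytes written: (P/E cycles x capacity) / WAF."""
    return pe_cycles * capacity_tb / waf

qlc_tbw  = tbw(pe_cycles=900,    capacity_tb=0.5,  waf=3.75)   # stock QLC mode
pslc_tbw = tbw(pe_cycles=60_000, capacity_tb=0.12, waf=1.8)    # forced pSLC mode

print(f"QLC : {qlc_tbw:,.0f} TBW")            # 120 TBW
print(f"pSLC: {pslc_tbw:,.0f} TBW")           # 4,000 TBW
print(f"Gain: {pslc_tbw / qlc_tbw:.1f}x")     # ~33.3x
```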

TEST BENCH
– OS: Windows 11 Pro 64-bit (Build: 23H2)
– CPU: Intel Core i7-13700K (5.7 GHz all-core, E-cores and Hyper-Threading disabled)
– RAM: 2 × 16 GB DDR4-3200 CL16 Netac (with XMP)
– Motherboard: MSI Z790-P PRO WIFI D4 (Bios Ver.: 7E06v18)
– GPU: RTX 4060 Galax 1-Click OC (Drivers: 537.xx)
– (OS Drive): SSD Solidigm P44 Pro 2TB (Firmware: 001C)
– DUT SSD: SSD BX500 “SLC-Test” 500GB (Firmware: my custom firmware)
– Chipset Driver Intel Z790: 10.1.19376.8374.
– Windows: Indexing disabled to avoid affecting test results.
– Windows: Windows updates disabled to avoid affecting test results
– Windows: Most Windows applications disabled from running in the background.
– Boot Windows: Clean Image with only Drivers
– Test pSLC Cache: The SSD is cooled by fans to prevent thermal throttling, ensuring it doesn’t interfere with the test results.
– Windows: Antivirus disabled to minimize variation in each round.
– DUT SSDs: Used as a secondary drive, with 0% of space being utilized, and other tests conducted with 50% of space utilized to represent a realistic scenario.
– Quarch PPM QTL1999 – Power consumption test: conducted in three stages: idle, with the drive left as a secondary disk; then, after a period of idle, a one-hour write test during which the average power consumption is recorded.

CONTRIBUTIONS TO PROJECTS LIKE THIS IN THE FUTURE

If you enjoyed this article and would like to see more articles like this, I’ll be leaving a link below where you can contribute directly. In the future, I plan to bring a comparison showing the difference in SLC cache sizes, transforming a QLC or TLC SSD into SLC, among many other topics.

Paypal – [email protected]

CRYSTALDISKMARK

We conducted synthetic sequential and random tests with the following configurations:

Sequential: 2x 1 GiB (Blocks 1 MiB) 8 Queues 1 Thread

Random: 2x 1 GiB (Blocks 4 KiB) 1 Queue 1/2/4/8/16 Threads

In these sequential scenarios, the difference is basically nonexistent because even with the pSLC Cache, the SSD already reaches its maximum bandwidth and the manufacturer’s sequential speeds. Not to mention that this is a quick test; in a more extensive and heavy benchmark, we will see that there will indeed be a difference.

In terms of latency, there was a considerable drop. With the SSD at stock, the NAND being written (or read) starts out in its native QLC mode, and there is extra latency until those cells are reprogrammed as SLC cache. With the SSD fully in pSLC mode, this latency is much lower because the cells always stay in pSLC.

The same happens with its random speeds; we can see that there was a greater difference in these benchmarks compared to sequential speed, where the difference was almost negligible.

The same happens at QD1; we can see that in reading, the SSD had an increase of over 16% in its speeds, while in writing, there was a much larger increase of over 30%.

ATTO Disk Benchmark QD1 and QD4

We conducted a test using ATTO to observe the speed of SSDs at various block sizes. In this benchmark, it was configured as follows:

Block sizes: from 512 Bytes to 8 MiB

File size: 256MB

Queue Depth: 1 and 4.

ATTO Disk Benchmark is a tool that performs sequential speed tests using compressible data. To simulate a typical data-transfer load in Windows, we look mainly at block sizes from 128KB to 1MB. Here we observe that the SSD in pSLC mode outperforms the SSD in its factory mode across all block sizes, which is impressive once again.

The same pattern repeated at queue depth 1, although the difference in some block sizes was slightly lower compared to a queue depth of 4.

3DMark – Storage Benchmark

In this benchmark, various storage-related tests are conducted, including game loading tests for titles like Call of Duty Black Ops 4 and Overwatch, recording and streaming gameplay with OBS at 1080p 60 FPS, game installations, and file transfers of game folders.

image027 1

In this benchmark, which focuses more on casual workloads, we can see that even in a scenario fully representative of reality there is indeed a performance difference, especially in latency, although it may not be entirely noticeable in everyday use in these “lighter” scenarios.

PCMARK 10 – FULL SYSTEM DRIVE BENCHMARK

In this test, the Storage Test tool was used along with the “Full System Drive Benchmark,” which performs light and heavy evaluations on the SSD.

pcmark10 fb og


Among these traces, we can observe tests such as:

  • Boot Windows 10
  • Adobe After Effects: Launching the application until it’s ready for use
  • Adobe Illustrator: Launching the application until it’s ready for use
  • Adobe Premiere Pro: Launching the application until it’s ready for use
  • Adobe Lightroom: Launching the application until it’s ready for use
  • Adobe Photoshop: Launching the application until it’s ready for use
  • Battlefield V: Loading time until the start menu
  • Call of Duty Black Ops 4: Loading time until the start menu
  • Overwatch: Loading time until the start menu
  • Using Adobe After Effects
  • Using Microsoft Excel
  • Using Adobe Illustrator
  • Using Adobe InDesign
  • Using Microsoft PowerPoint
  • Using Adobe Photoshop (Intensive use)
  • Using Adobe Photoshop (Lighter use)
  • Copying 4 ISO files, totaling 20GB, to a secondary disk (Write test)
  • Performing the ISO file copy (Read-write test)
  • Copying the ISO file to a secondary disk (Read)
  • Copying 339 JPEG files (Photos) to the tested disk (Write)
  • Creating copies of these JPEG files (Read-write)
  • Copying 339 JPEG files (Photos) to another disk (Read)
image034 1

In this scenario, which is a practical benchmark with a slightly greater focus on writing than 3DMark, as it is more productivity-oriented, it’s possible to notice the practical difference in day-to-day use. The difference was striking, almost twice the performance.

Adobe Premiere Pro 2021

Next, we used Adobe Premiere to measure the average time it takes to open a project of about 16.5GB with 4K resolution, 120Mbps bitrate, and full of effects until it was ready for editing. It’s worth noting that the tested SSD is always used as a secondary drive without the operating system installed, as this could affect the results, leading to inconsistencies.

image037 1

Here we can see that, as it is more of a scenario of sequential data reading from the project, the difference was almost negligible, just a variation between runs.

WINDOWS BOOT TIME AND GAME LOADING TIME

We compared the stock SSD (with pSLC cache) against the converted SSD (full pSLC mode) using the Final Fantasy XIV benchmark.

image038

The same happens with game loading times, because the limitation lies in the game’s storage API which, unlike DirectStorage, is not optimized enough for the difference to be noticeable.

image042

The same can be said for Windows, as although it is a completely new system, it cannot take advantage of features like the one we applied to the SSD.

SLC CACHING

A large part of SSDs on the market currently utilize SLC Caching technology, where a certain percentage of their storage capacity, whether it’s MLC (2 bits per cell), TLC (3 bits per cell), or QLC (4 bits per cell), is used to store only 1 bit per cell. In this case, it’s used as a write and read buffer, where the controller starts writing, and when the buffer is depleted, it writes to the native NAND Flash (MLC/TLC/QLC).
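
One consequence of storing a single bit per cell is that the cache is expensive in raw capacity: every gigabyte of pSLC consumes cells that would otherwise hold four gigabytes of QLC data. A quick sketch of that trade-off using this drive’s numbers (the 45GB cache size comes from the IOmeter test further below):

```python
# Capacity cost of running QLC cells (4 bits/cell) as pSLC (1 bit/cell).
bits_native, bits_pslc = 4, 1
ratio = bits_native / bits_pslc

print(f"45 GB of pSLC cache occupies ~{45 * ratio:.0f} GB of native QLC capacity")   # ~180 GB
print(f"Whole 500 GB drive as pSLC: ~{500 / ratio:.0f} GB")  # ~125 GB theoretical; ~120 GB in practice
```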

image 40
image043

Through IOmeter, we can get an idea of the SLC cache volume of this SSD, as manufacturers often do not provide this information. Based on the tests we conducted, it was found that it has a pSLC cache volume that appears to be dynamic, relatively small, around 45GB. It managed to maintain an average speed of approximately 493MB/s until the end of the buffer, which is a good speed considering it is a SATA SSD.

However, after writing about 45GB it enters the folding process, since it has used up everything it had allocated as pSLC. Here we see the true Achilles’ heel of QLC SSDs: sustained speed was quite low, averaging around 50 MB/s.

image045

Now, when we transform this SSD into pSLC, we see that it writes to its full capacity of 120GB at an average of 498 MB/s. And to confirm, we wrote up to 500GB to the SSD, and even then, it continued rewriting its capacity more than 4 times at almost 500 MB/s.

image047

As we can see in the graph above, we averaged the SSD’s write speed, combining the speed within the pSLC Cache + Folding + Native. Taking this into account, we see that the difference was striking, almost 10 times higher.
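
A rough model of where that near-10x figure comes from, using the segment speeds measured above (roughly 45GB at 493 MB/s in cache and the rest of the drive folding at about 50 MB/s, versus about 498 MB/s across the full 120GB in pSLC mode):

```python
# Average write speed for filling each drive once, from the measured segment speeds.
def avg_speed(segments):
    """segments: list of (gigabytes, MB/s). Returns the overall MB/s for the whole fill."""
    total_gb = sum(gb for gb, _ in segments)
    total_s = sum(gb * 1000 / mbs for gb, mbs in segments)   # 1 GB = 1000 MB
    return total_gb * 1000 / total_s

stock = avg_speed([(45, 493), (500 - 45, 50)])   # pSLC cache, then folding / native QLC
pslc  = avg_speed([(120, 498)])                  # full-drive pSLC

print(f"Stock QLC fill: {stock:.0f} MB/s")       # ~54 MB/s
print(f"pSLC-mode fill: {pslc:.0f} MB/s")        # ~498 MB/s
print(f"Ratio         : {pslc / stock:.1f}x")    # ~9x
```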

FILE COPY TEST

In this test, the ISO files and CSGO were copied from a RAM Disk to the SSD to see how it performs. The Windows 10 21H1 ISO of 6.25GB (1 file) and the CSGO installation folder of 25.2GB were used.

image049

In a more realistic test like this, we can see that there is no difference because the SLC cache volume of the SSD natively is larger than the size of the tested file.

image051

And even the larger folder is still smaller than the SSD’s native SLC cache volume. I don’t test with larger files because I copy from a RAM Disk, and with only 32GB of RAM I can’t create a larger one.

TEMPERATURE TEST

In this part of the analysis, we will observe the temperature of the SSD during a stress test, where the SSD receives files continuously, to determine if there was any thermal throttling with its internal components that could cause a bottleneck or loss of performance.

image053

The SSD doesn’t even heat up because it’s a low-power consumption SSD, as we’ll see throughout the analysis, and I believe this sensor to be the NAND Flash sensor.

POWER CONSUMPTION AND EFFICIENCY

SSDs, like many other components in our system, have a certain power consumption. The most efficient ones can perform tasks quickly with relatively low power consumption, allowing them to transition back to idle power states where they consume less energy.

quarch programmable power module
SPECIAL THANKS TO QUARCH FOR SENDING THIS UNIT

In this section of the analysis, we will use the Quarch Programmable Power Module provided by Quarch Solutions (pictured above) to conduct tests and determine how efficient the SSD is. This methodology involves conducting three tests: measuring the maximum power consumption of the SSD, calculating an average power consumption in practical and casual scenarios, and measuring power consumption during idle periods.

This set of tests, especially those related to efficiency and idle power consumption, is important for users who intend to use SSDs in laptops. SSDs spend the vast majority of their time in low-power states (idle), so understanding their power consumption characteristics can significantly impact battery life and overall energy efficiency.

image061

We can see that thanks to this modification, its efficiency has increased dramatically. This occurred because, although the difference in power consumption was not as significant as we will see shortly, the speed in MB/s was extremely high.

Because the benchmark exceeds the SSD’s 45GB cache by a large margin, the drive spent a significant portion of the test at a very low speed of less than 55 MB/s, resulting in low efficiency. In pSLC mode, it was able to write twice its capacity at even lower power consumption than in QLC mode, and its bandwidth never dropped. This is what produced the large difference in efficiency.

image055

Although this SSD naturally has low power consumption, we do observe a further decrease when transforming it into pSLC mode. This occurs because SLC NAND has only 2 voltage levels per cell (a single bit), so each cell can be programmed and sensed with lower threshold voltages and far less precision, whereas QLC NAND must distinguish 16 levels per cell, requiring finer, higher-voltage programming steps. This explains the reduction in power consumption.

image057

Once again, we see this in the average of both SSDs.

image059

Last but not least, the Idle test, which represents the scenario where the vast majority of SSDs are in everyday use. Here we can see that it had even lower consumption in Idle. Another positive point.

What can we conclude from this?

Once again, I stress the importance of caution with this procedure as it can indeed go wrong if not done correctly. However, we see that the differences are significant in some scenarios while more subtle in others. But now, in terms of durability, the difference is immense!
