US5592310A - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
US5592310A
Authority
US
United States
Prior art keywords
color
image
pattern
image processing
signal
Prior art date
Legal status
Expired - Fee Related
Application number
US08/345,327
Inventor
Takashi Sugiura
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc
Priority to US08/345,327
Application granted
Publication of US5592310A
Expired - Fee Related

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40 - Picture signal circuits
    • H04N1/40012 - Conversion of colour to monochrome

Definitions

  • the present invention relates to an image processing method and apparatus, and more particularly to an image processing method and apparatus which is applicable to a printer, digital copier or facsimile which prints image data from an image scanner, a computer or the like.
  • an original image is irradiated by a light source such as a halogen lamp and the reflection of the original image is read by a solid image pickup device such as a CCD (charge coupled device).
  • the image signal which was photoelectrically transformed by the solid image pickup device is further transformed into a digital signal.
  • the corrected image signal is output to a printer such as a laser beam printer, thermal printer, or ink jet printer.
  • the recording image is formed on a recording medium such as recording paper.
  • an image processing apparatus which inputs a color image signal, and generates and outputs pattern data corresponding to the color of the color image signal, comprising: color discrimination means for inputting a color image signal and discriminating the colors of the color image signal; pattern generation means for generating a pattern which is predetermined to correspond to the color discrimination signal from the color discrimination means; and brightness conversion means for converting the brightness of the pattern in accordance with the result of the color discrimination.
  • an image processing method which inputs a color image signal, and generates and outputs pattern data corresponding to the color of an image signal, comprising the steps of: inputting the color image signal and discriminating the color of the color image signal; generating the predetermined pattern corresponding to every color; and converting the brightness of the pattern according to the result of the color discrimination of the color image signal.
  • FIGS. 1A and 1B are block diagrams illustrating the circuit construction of an image processing apparatus of a digital copier according to a first embodiment of the invention
  • FIG. 2 is a diagram illustrating the construction of a CCD sensor
  • FIG. 3 is a timing chart of a CCD driving pulse signal
  • FIG. 4 is a block diagram illustrating the construction of a CCD driving pulse generation circuit
  • FIG. 5 is a timing chart of the CCD driving pulses
  • FIG. 6 is a block diagram illustrating the construction of a black correction circuit of a digital copier according to the present embodiment
  • FIG. 7 is a diagram illustrating the concept of a black correction in a black correction circuit of the digital copier according to the present embodiment
  • FIG. 8 is a block diagram illustrating a white correction circuit of the digital copier according to the present embodiment.
  • FIG. 9 is a diagram illustrating the concept of the white correction in the white correction circuit of the digital copier according to the present embodiment.
  • FIG. 10 is a format diagram of data for a white board in the white correction circuit of the digital copier according to the present embodiment
  • FIG. 11 is a flowchart illustrating the procedure of white correction in the white correction circuit of the digital copier according to the present embodiment
  • FIG. 12 is a diagram of the brightness signal generator of the digital copier according to the present embodiment.
  • FIG. 13 is a diagram illustrating a color space where color discrimination is performed by hue
  • FIG. 14 is a block diagram illustrating the construction of a color discrimination circuit of the digital copier according to the present embodiment
  • FIG. 15 is a block diagram illustrating the construction of a pattern generation circuit of the digital copier according to the present embodiment
  • FIGS. 16A-16F are the color patterns corresponding to each color which is output from the pattern generation circuit of the digital copier according to the present embodiment
  • FIG. 17 is a block diagram illustrating the construction of the pattern composite circuit of the digital copier according to the present embodiment.
  • FIG. 18 is a diagram for explaining the effect in the color border area in the digital copier according to the present embodiment.
  • FIG. 19 is a diagram for explaining the edge adding effect in the digital copier according to the present embodiment.
  • FIG. 20 is a block diagram illustrating the construction of the pattern composite circuit of the digital copier according to the other embodiment.
  • FIGS. 1A and 1B are block diagrams illustrating the circuit construction of the image processing apparatus of the digital copier according to the present embodiment.
  • numeral 101 refers to a CCD (charge coupled device), an image sensor (color reading sensor), for converting the color original image, which is formulated on the image reading face of the CCD 101 using color separation filters, into electrical signals of G (green), B (blue), and R (red) through an optical system such as a rod lens.
  • the numeral 102 refers to an amplifying circuit which amplifies the image output signal from the CCD 101, which amplified image signal is output to a coaxial cable 103.
  • the numeral 104 refers to a S/H (sample and hold) circuit which performs a S/H on the color image signal outputted from the amplifying circuit 102 through the coaxial cable 103, and outputs the S/H signal as a color signal of G, B, and R.
  • the numeral 105 refers to an A/D (analog/digital) conversion circuit which converts the analog color image signal on which the S/H is performed in the S/H circuit 104 into a digital color image signal.
  • the numeral 106 refers to a position correction circuit which electrically corrects the reading position of each channel of the CCD 101
  • the numeral 107 refers to a black/white correction circuit which performs black level correction and white level correction (shading correction), to be described later, on the digital image signal.
  • numeral 108 refers to a brightness signal generator which generates brightness signals from the digital color image signal in which the black and white corrections have been processed.
  • the numeral 109 refers to a color discrimination circuit which discriminates the color of each pixel of the digital color image in which the black and white corrections have been processed.
  • the color discrimination circuit outputs the color discrimination signal 131 according to the hue signal and the shift signal 132 which indicates a shift from the center of a discrimination range (shift from the representative value of the hue).
  • the numeral 110 refers to a pattern generation circuit comprising a storage medium of a RAM or ROM which outputs a predetermined pattern corresponding to each color in accordance with the result of color discrimination by the color discrimination circuit 109.
  • the pattern generation circuit 110 is set to recognize the color discrimination signal 131 corresponding to the hue signal as a reading address and to output a pattern which is stored in the memory in advance.
  • the numeral 111 refers to a pattern composite circuit which outputs either the brightness signal 133 generated in the brightness signal generator 108 according to the HIT signal 135 or the pattern signal 134, indicating what color is generated from the pattern generation circuit 110, which is converted according to the shift signal 132.
  • the numeral 112 refers to a LOG convertor which converts a brightness signal from the pattern composite circuit 111 into the density signal and outputs the density signal to a connected printer.
  • section A in FIG. 1A which is enclosed by a chain line, corresponds to the video image processing circuit of the image reader (image scanner).
  • a full-color original image is exposed by a light source such as a halogen lamp or fluorescent lamp (not shown) and the reflected color image from the color original image is picked up by a color image sensor such as a CCD. Then, an analog image signal which is obtained by the image sensor is digitized by an A/D convertor. The printed image is obtained in a manner such that after the digitized full-color image signal has been processed, the signal is output to an image forming apparatus such as a thermal printer, ink jet printer, or laser beam printer (not shown). The detailed process is described below.
  • the ink jet printer here includes a so-called bubble jet printer as shown in U.S. Pat. No. 4,723,129.
  • the color original is irradiated by an exposure lamp (not shown) and reflected light from the original is separated into RGB by the color separation filters. Then, the signal is input into the color reading sensor 101 and is amplified to a predetermined level by the amplifying circuit 102.
  • the CCD 101 is driven by a clock signal which was generated by the system pulse generator (not shown).
  • FIG. 2 is a diagram illustrating the arrangement of each sensor chip (sensor device) of the color reading sensor 101 and FIG. 3 illustrates the timing of the driving pulse of each sensor device.
  • the sensor 101 comprises five sensor devices which are arranged in zigzags to read data in a manner such that the main scanning direction is divided into five parts.
  • in each sensor device, 63.5 μm is predetermined as one pixel and the pixel data for 1024 pixels can be read at a density of 400 dpi (dots/inch).
  • one pixel is divided into three parts, in the order of G, B, and R, in the main scanning direction.
  • each sensor device comprises 3072 (1024 × 3) effective pixels in total.
  • each sensor device 58a-62a is formed on a single ceramic substrate.
  • the image is scanned to the AL direction.
  • the 1st, 3rd, and 5th devices (58a, 60a, 62a) and the 2nd and 4th devices (59a, 61a) are independently synchronized and respectively driven by the driving pulse groups ODRV (118a) and EDRV (119a).
  • pulse signals O01A, O02A, and ORS included in the driving pulse group ODRV (118a), and pulse signals E01A, E02A, and ERS included in the driving pulse group EDRV (119a) are categorized as charge transfer clocks (O01A, O02A, E01A and E02A) and charge reset pulses (ORS and ERS).
  • These pulse groups are totally synchronized and generated so that jitter will not be generated, in order to suppress noise and interference between the 1st, 3rd, and 5th devices and the 2nd and 4th devices. For this purpose, these pulses are generated in synchronization with a clock signal from the reference oscillating source OSC (not shown).
  • FIG. 4 is a block diagram illustrating the construction of the circuit which generates the aforementioned driving pulse groups ODRV (118a) and EDRV (119a).
  • FIG. 5 is a diagram illustrating the timing of the driving pulses. This circuit block is included in the system control pulse generator (not shown).
  • the clock KO (135a) which divides the source clock CLKO generated from a single reference oscillating source OSC (558a) is a clock signal generating the reference signals SYNC 2 and SYNC 3 which determines the timing of generating the sensor driving pulses ODRV and EDRV.
  • the output timing of the reference signals SYNC 2 and SYNC 3 is determined according to a set value of presettable counters 64a and 65a which are set by signal line 22 connected to the CPU bus.
  • the reference signals SYNC 2 and SYNC 3 respectively initialize a frequency divider 66a and driving pulse generator 68a, and a frequency divider 67a and driving pulse generator 69a.
  • each pulse group of ODRV (118a) and EDRV (119a) is obtained as synchronized signals without jitter.
  • the signal turbulence caused by interference among the sensor devices (58a, 59a, 60a, 61a, and 62a) can be prevented.
  • the sensor driving pulse ODRV (118a) which was obtained by synchronization is supplied to the 1st, 3rd, and 5th sensor devices (58a, 60a, and 62a) and the sensor driving pulse EDRV (119a) is supplied to the 2nd and 4th sensors (59a and 61a).
  • Each of the sensor devices independently outputs the video signals V1-V5 in synchronization with the driving pulses.
  • each of the video signals V1-V5 is amplified to a predetermined voltage by the amplifying circuits 501-1 through 501-5 which are independent among the channels.
  • the signals of V1, V3, and V5 are transmitted at the timing of the OOS (129) and the video signals V2 and V4 are transmitted at the timing of the EOS (134) through the coaxial cable 103.
  • the voltage signals V1-V5 are input into the S/H circuit 104 of the video image processing circuit and held therein.
  • the analog color image signals which are sampled and held in every R, G, and B in the S/H circuit 104 are digitized in each channel of CH 1-CH 5 in the A/D conversion circuit 105.
  • the digitized signals are output to the position correction circuit 106 independently, but in parallel.
  • the position correction is performed in the sub-scanning direction by the position correction circuit 106 having the memory for a plurality of lines.
  • the black level output signals of the channels CH 1-CH 5 vary greatly among the sensor devices and pixels in the case where a small quantity of light is input to the CCD sensor 101. If such an image signal is output to a printer as it is, lines and unevenness may appear in the printed image data (including patterns to be described later).
  • prior to an image reading operation, the original image scanning unit is moved to the position of a black board having an even density which is arranged at the tip of the original image stand, outside the image reading area. Then a halogen lamp is lit, the black board is read, and the black level image signal is input.
  • the operation that the image data of the black level for one line is stored in the black level RAM 78a is described first.
  • the selector 82a selects the A input by the selection signal 604 and the gate 80a is closed by the gate signal 601, while the gate 81a is opened by the gate signal 602. That is, the data path is connected as 151a→152a→153a.
  • the address input 155a of RAM 78a is initialized by the HSYNC and the A input of the selector 83a is selected by the selection signal 603 so that the output 154a of the address counter 84a which counts VCLK is input to the selector 83a and output as the address of the RAM 78a.
  • the black level signal for one line is stored in the RAM 78a (referred to as a black reference value reading mode).
  • the RAM 78a operates in a data reading mode and the black level data, which was read out pixel by pixel in each line of the RAM 78a, is input to the B input of the subtracter 79a via the data line 153a→157a. Then, the result of the operation (A input)-(B input) is output from the subtracter 79a. At this time, the gate 81a is closed by the gate signal 602, while the gate 80a is opened by the gate signal 601. Side A is selected by the selection signal 605 and the output is output to the side A.
  • the similar control is performed in the block 77aG or 77aR.
  • the image signals Bout, Gout, and Rout, on which the black correction has been performed for each color component, are output.
  • control signal of each selector gate and the gate signals 601-605 for the above-described controls are formed under control of a CPU (not shown) as the output of the latch 85a is allotted as an I/O of the CPU.
  • the CPU is enabled to access the RAM 78a when the side B is selected by the selectors 82a, 83a, and 86a.
  • the white level correction (shading correction) in the black/white correction circuit 107 is described.
  • This correction is to correct variation in the light system, optical system, and sensitivity of the sensor on the basis of white color data when the original image scanning unit is moved to a position to irradiate the white board, which when read has an even density.
  • the basic construction of white correction circuit is shown in FIG. 8.
  • the basic construction of this circuit is similar to that of the black correction circuit in FIG. 6.
  • the correction is performed in a manner such that a black image signal is stored in the RAM 78a and the value is output to the subtracter 79a. The correction is thus performed by subtraction.
  • the difference in the white correction circuit is that the white data is stored in the RAM 78b, and the value is output to the multiplier 79b and multiplied by the image signal. Since the operations in the other parts are similar to those of the black correction, an explanation is omitted.
  • the exposure lamp (not shown) is lit prior to the operation of either duplication or reading, and the white board is read.
  • the image data of the white level having an even density which was read in this way is stored in the correction RAM 78b which has memory capacity capable of storing the pixel data for one line. For example, if the main scanning direction has a width corresponding to the length of the longitudinal side of A4 size paper, 4752 pixels (16 pel/mm × 297 mm) must be stored, so at least 4752 bytes are necessary for the memory capacity of the RAM 78b.
  • the "FF H " indicates "255" in hexadecimal notation.
  • the CPU outputs the gate signals which open the gates 80b and 81b to the latch circuit 85b and the selection signals 801-805 which select the B input in the selectors 82b, 83b, and 86b.
  • the RAM 78b can be accessed by the CPU.
  • FIG. 11 is a flowchart illustrating the procedure for forming the white correction data and storing the data in the RAM 78b.
  • in step S1, the pointer i is set to "0" and the white color data Wi which is stored in the RAM 78b is read.
  • in step S2, the operation FFH/Wi is performed on the white color data Wi.
  • the white correction data is stored in the RAM 78 corresponding to each circuit of 77bB, 77bG, and 77bR in FIG. 8.
  • the side A is selected by the selection signals 803 and 805 in the selectors 83b and 86b.
  • the counter data FFH/Wi which was read out of the RAM 78b is input to the multiplier 79b through the signal path 153b→157b.
  • the quotient FFH/Wi is multiplied by the original image data 151b which was input from the A input terminal, and the product (Di × FFH/Wi) is obtained and output.
  • the image data R out 121, G out 122, and B out 123 on which the black and white corrections have been performed are input to the brightness signal generator 108 and the color discrimination circuit 109.
  • the brightness signal generator 108 standardizes the filter image which was read by the CCD sensor 101 and forms an ND image. Refer to FIG. 12 for an explanation of the above-described operation.
  • the input image data R out 121, G out 122, and B out 123 are added by the adder 201, and the sum is divided by three by the divider 202, and the average is obtained and output as the brightness signal 133.
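
A minimal sketch, not the patent's circuit: assuming the three black/white-corrected components arrive as 8-bit NumPy arrays, the brightness signal generator 108 reduces to an add-and-divide, as illustrated below.

```python
import numpy as np

def brightness_signal(r, g, b):
    """Model of the brightness signal generator 108: the adder 201 sums the
    three corrected color components and the divider 202 divides the sum by
    three, yielding the 8-bit brightness (ND) signal 133."""
    total = r.astype(np.uint16) + g.astype(np.uint16) + b.astype(np.uint16)  # adder 201
    return (total // 3).astype(np.uint8)                                     # divider 202

# Example: a pixel with R=200, G=100, B=60 gives a brightness of (200+100+60)//3 = 120.
print(brightness_signal(np.array([200]), np.array([100]), np.array([60])))  # [120]
```
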
  • the color discrimination circuit 109 is now described.
  • the present embodiment utilizes a hue signal as a color discrimination signal. This enables accurate color discrimination even when the colors being compared are the same but differ from one another in intensity or saturation.
  • the outline of the color detection method is first described.
  • Each of the input R, G, B data (Rout, Gout, and Bout) is 8 bits, so the input carries information on 2^24 colors in total. If such a large amount of information is used as is, the apparatus becomes expensive because of the size of the circuit. In order to reduce the cost, the circuit size can be reduced in the following way.
  • the present embodiment utilizes the hue signal.
  • the hue according to the present embodiment is different from the ordinary hue; however, it is referred to here as a "hue signal" for convenience.
  • the color space can be expressed in 3 dimensions comprising saturation, hue, and intensity such as Munsell's cube.
  • the common area of the R, G, B data, that is, the minimum value min(R, G, B) of R, G, and B, indicates an achromatic color component
  • min(R, G, B) is subtracted from each of the R, G, B data and the 3-dimensional input color space is converted into a 2-dimensional color space by utilizing the remaining information as chromatic color components.
  • the planar ring of 0°-360° to which the 3-dimensional space is converted is divided into six sections as shown in FIG. 13.
  • the hue is obtained from the comparison of the R, G, B data, that is, the ordering information R>G>B, R>B>G, G>B>R, G>R>B, B>G>R, or B>R>G, together with the maximum and middle values of the input R, G, B data and an LUT (look-up table).
  • the detection circuit 1401 compares the magnitudes of the Rout, Gout, and Bout data on which the black/white corrections have been performed.
  • the input data are compared by comparators, which output the maximum (max), middle (mid), and minimum (min) values according to the result of the comparison.
  • the output value of the comparator is determined and output as the rank signal 1414.
  • the min value is subtracted from the max and mid values by the subtracters 1402 and 1403 so that the achromatic color components are reduced from the max and mid values.
  • the subtracted value is input into the hue detection circuit 1404 with the rank signal 1414.
  • the hue detection circuit 1404 comprises the storage device capable of random access such as RAM or ROM.
  • the hue detection circuit 1404 comprises the look up table using ROM.
  • the values corresponding to the angles on the plane are stored in advance as shown in FIG. 13, and the corresponding hue value is output according to the input rank signal 1414, the (max-min) value, and the (mid-min) value.
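
As a rough software analogue of the rank detection (1401), the min-subtraction (1402, 1403), and the LUT-based hue detection circuit 1404, the sketch below maps an R, G, B triple onto a 0-359 value on the six-sector ring of FIG. 13. The linear interpolation inside each 60° sector is an assumption made for illustration; the actual circuit simply reads a pre-written value out of the table.

```python
def hue_like_value(r, g, b):
    """Rank the components (detection circuit 1401), drop the achromatic part
    min(R, G, B) (subtracters 1402/1403), and map what remains onto the
    0-359 degree ring (hue detection circuit 1404).  Returns None for an
    achromatic pixel, which has no chromatic component left."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return None                                  # achromatic: no hue defined
    if mx == r:                                      # sectors R>G>B and R>B>G
        h = (60 * (g - b) / (mx - mn)) % 360
    elif mx == g:                                    # sectors G>B>R and G>R>B
        h = 120 + 60 * (b - r) / (mx - mn)
    else:                                            # sectors B>G>R and B>R>G
        h = 240 + 60 * (r - g) / (mx - mn)
    return int(h)

print(hue_like_value(200, 50, 50))    # 0   -> red region of the ring
print(hue_like_value(50, 200, 50))    # 120 -> green region
print(hue_like_value(60, 60, 200))    # 240 -> blue region
```
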
  • the hue value outputted in this way is then input into the window comparators 1405 and 1406.
  • the values set in the window comparators 1405 and 1406 are hue data values which the CPU sets, with an offset, around the hue value of the color to be patternized, designated by a data input means (not shown). If the value set in the comparator 1405 is a1, it is arranged so that, in the case where (hue data) > a1, "1" is output from the comparator 1405 for the hue data which is input from the hue detection circuit 1404. If the value set in the comparator 1406 is a2 (a1 < a2), it is arranged so that, in the case where (hue data) < a2, "1" is output from the comparator 1406. In this way, in the case where a1 < (hue data) < a2, "1" is output from the AND gate 1410.
  • the CPU 1407 sets the middle value (a1+a2)/2 of the two values a1 and a2, which are set in the window comparators 1405 and 1406, in the subtracter 1408. Then the difference (absolute value) between the output value 131 from the hue detection circuit 1404 and this middle value of the color discrimination range is obtained by the subtracter 1408 and output as the shifted distance 132.
  • the difference value is input into the buffer 1409 and controlled by the output from the AND circuit 1410. That is, in the case where the output of the AND circuit 1410 is "1", the stored difference value is output from the buffer 1409 as the shifted distance 132. On the other hand, in the case where the output of the AND circuit 1410 is "0", that is, where a1 < (hue data) < a2 is not satisfied, the output of the buffer 1409 becomes high impedance.
  • the section 14(a) which is enclosed by a dotted line has the same construction as the circuit of the section 14(b). According to the present embodiment, it is set so that two colors can be discriminated. If colors to be discriminated are added, sections such as the one enclosed by the dotted line 14(b) need to be added. In the following description, the sections enclosed by the dotted lines are respectively referred to as 14(a) and 14(b). Furthermore, as will be described later with reference to FIG. 15, the color discrimination signal 131 is 5 bits in order to discriminate five colors.
  • the outputs of the buffer 1409 and buffer 1411 are common.
  • the CPU 1407 needs to control values which are set in the window comparator so that the outputs of the buffer 1409 and buffer 1411 will not be active at the same time. In other words, it is set so that the window ranges in the sections enclosed in the dotted line should not overlap.
  • the common outputs of the buffers 1409 and 1411 are pulled up by the resistance array 1412. This is because the output level of the shifted distance 132 is to be at a high level in the case where the outputs of the AND circuit 1410 of the color detection circuits 14(a) and 14(b) are all "0". That is, in the case where the input R, G, and B image signals cannot be discriminated by any colors, the data in which all the bits are "1" are output as the shifted distance 132.
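
In software, the per-color window comparators (1405, 1406), the subtracter 1408, and the tri-state buffers (1409, 1411) with the pull-up array 1412 behave roughly as sketched below. The window settings are hypothetical values standing in for what the CPU 1407 would program; 0xFF is returned when no window matches, mirroring the pulled-up bus.

```python
def discriminate(hue, windows):
    """For each registered color, test a1 < hue < a2 (window comparators 1405
    and 1406 plus AND gate 1410); on a hit, report the color and the absolute
    shift from the window center (subtracter 1408, shifted distance 132).
    The windows must not overlap, as the CPU must guarantee in the circuit."""
    for color, (a1, a2) in windows.items():
        if a1 < hue < a2:
            center = (a1 + a2) // 2              # value the CPU sets in the subtracter 1408
            return color, abs(hue - center)      # shifted distance 132
    return None, 0xFF                            # no hit: bus pulled up, all bits "1"

windows = {"red": (340, 360), "yellow": (45, 75)}    # hypothetical CPU settings
print(discriminate(60, windows))    # ('yellow', 0)  -> exactly at the window center
print(discriminate(73, windows))    # ('yellow', 13) -> close to the window edge
print(discriminate(200, windows))   # (None, 255)    -> outside every window
```
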
  • the dot pattern data corresponding to each color is written in advance as shown in FIGS. 16A-16F. Each pattern is a 16 × 16 dot pattern.
  • the ROM 803 for pattern selects the pattern data according to the color discrimination signal 131 which is input from the color discrimination circuit 109.
  • the pattern data generation processing is performed by repeatedly reading the data in the main scanning direction and sub-scanning direction.
  • the main scanning counter 802 is synchronized with the horizontal synchronizing signal HSYNC and operates by counting the video clock CLK.
  • the sub-scanning counter 801 is synchronized with the ITOP signal and operates by counting the horizontal synchronizing signal HSYNC.
  • the output data from the ROM 803 for patterns is 8 bits and the most significant bit (MSB) is utilized as the control signal (HIT signal 135) in the pattern composite circuit 111, which will be described later. Therefore, the data written in the ROM 803 for patterns is such that the MSB is normally "0" but is "1" when a pattern is to be output.
  • the ROM 803 for patterns can be replaced by a RAM. If a RAM is used, the capacity and bit allotment of an address are similar to those of the ROM.
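
The addressing described above can be mimicked with a plain array lookup. The two tile patterns below are hypothetical stand-ins for the FIGS. 16A-16F patterns held in the ROM 803; the modulo operations play the role of the main scanning counter 802 and sub-scanning counter 801, and the returned flag stands for the MSB of the 8-bit ROM output (HIT signal 135).

```python
# Hypothetical 16 x 16 binary tiles; a real device holds FIGS. 16A-16F in the ROM 803.
LATERAL  = [[1 if y % 4 == 0 else 0 for x in range(16)] for y in range(16)]
VERTICAL = [[1 if x % 4 == 0 else 0 for x in range(16)] for y in range(16)]
PATTERNS = {0: LATERAL, 1: VERTICAL}     # indexed by the color discrimination signal 131

def pattern_rom(color_code, x, y):
    """Return (HIT, pattern bit) for the pixel at (x, y).  x % 16 corresponds to
    the main scanning counter 802, y % 16 to the sub-scanning counter 801, so the
    16 x 16 tile repeats over the page."""
    if color_code is None or color_code not in PATTERNS:
        return 0, 0                                       # MSB "0": no pattern for this pixel
    return 1, PATTERNS[color_code][y % 16][x % 16]        # MSB "1": pattern data valid

print(pattern_rom(0, x=37, y=4))      # (1, 1) -> on a lateral stripe
print(pattern_rom(None, x=37, y=4))   # (0, 0) -> brightness signal will be used instead
```
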
  • the shifted distance 132 which was output from the color discrimination circuit 109 is inverted by the inverter circuit 1004 and input into the pattern controller 1001.
  • the pattern signal 134 which was output from the pattern generation circuit 110 is converted to the density signal and output.
  • the shifted distance 132 is 8-bit data
  • the pattern controller 1001 has eight AND circuits 1003 and each bit of the shifted distance is input into each AND circuit.
  • the pattern signal 134 of 1 bit is input into the other input terminals of each AND circuit.
  • the AND circuits 1003 are opened according to the value of the shifted distance 132 and the 1-bit pattern signal 134 is converted to an 8-bit brightness signal.
  • the pattern information 1005 having the brightness according to the shifted distance 132 is output from the pattern controller 1001.
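
The inverter 1004 and the eight AND gates 1003 amount to a bitwise gate between the complemented shift and the 1-bit pattern; a sketch under the assumption of 8-bit values (the mapping of the resulting value to printed density is left to the LOG conversion described later):

```python
def pattern_brightness(pattern_bit, shifted_distance):
    """Sketch of the pattern controller 1001: the 8-bit shifted distance 132 is
    complemented (inverter 1004) and each of its bits is ANDed with the 1-bit
    pattern signal 134 (the eight AND circuits 1003), giving an 8-bit value."""
    inverted = ~shifted_distance & 0xFF        # inverter 1004
    return inverted if pattern_bit else 0      # eight AND gates: x & 0xFF or x & 0x00

print(pattern_brightness(1, 0))      # 255: zero shift leaves the pattern value at full scale
print(pattern_brightness(1, 200))    # 55:  a large shift from the window center scales it down
print(pattern_brightness(0, 50))     # 0:   positions where the pattern signal is "0"
```
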
  • the numeral 1101 refers to an actual original image, in which red and yellow rectangles are drawn.
  • a color discrimination error is caused at the border of red and yellow.
  • the drawing 1102 illustrates the case where the color is patternized and printed and the drawing 1103 is an enlarged drawing.
  • the illustrated patterns in FIG. 18 are different from the color patterns indicated in FIGS. 16A-16F.
  • the patterns for red and yellow coexist at their border and the output becomes unclear as a result.
  • the numerals 1104 and 1105 indicate the result obtained by the present embodiment.
  • the obscurity in the output which was caused by the coexistence of the patterns is eliminated by reducing the pattern density at the border of the colors where the color discrimination error occurred.
  • the graph 1105 indicates the change of the pattern density in the pattern output which is indicated by the drawing 1104.
  • the y-axis represents density and x-axis represents the position information of the pattern which corresponds to the pattern output 1104.
  • the image data, which was composited by the pattern composite circuit 111, is input to the LOG convertor 112 as shown in FIG. 1 so that the brightness/density conversion is performed and converted into the density signal.
  • the brightness/density conversion is performed by the look up table using ROM.
  • the signal which was converted into a density signal in the LOG convertor 112 is output to a monochrome printer where the image is formed (e.g. laser beam printer).
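
The brightness-to-density conversion is described only as a ROM look-up table; the sketch below shows how such a table might be precomputed, assuming 8-bit brightness in and 8-bit density out. The logarithmic scaling constant is an assumption for illustration, since the actual table contents are not given in the text.

```python
import math

# Precompute a 256-entry brightness-to-density table, as a ROM-based LUT would hold.
# Density is taken as proportional to -log10(brightness/255), clipped to 8 bits;
# the scaling constant 2.0 is an assumed dynamic range, not a value from the patent.
LOG_LUT = [255] + [
    min(255, int(-255 * math.log10(v / 255) / 2.0)) for v in range(1, 256)
]

def log_convert(brightness):
    """LOG convertor 112: map an 8-bit brightness signal to an 8-bit density signal."""
    return LOG_LUT[brightness]

print(log_convert(255))  # 0   -> white (maximum brightness, minimum density)
print(log_convert(128))  # 38  -> mid tone
print(log_convert(1))    # 255 -> near black
```
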
  • the drawing 1501 represents the color original image.
  • the color separation is performed on the original image by the hue discrimination and each color is patternized.
  • the result is shown in the drawing 1502.
  • red is expressed as lateral stripes and yellow as vertical stripes.
  • the drawing 1503 is an enlargement of the border part. At the border of the colors, a color discrimination error has occurred and the lateral stripe pattern and the vertical stripe pattern partially overlap.
  • the border where the color discrimination error occurred becomes black as shown in the drawing 1504.
  • the drawing 1505 represents the whole image in this method. The black edge surrounds the areas of each color, and thus, the clear output image can be obtained.
  • the drawing 1506 illustrates the conventional method in which the edge of the brightness information is extracted and emphasized. However, in this case, since there is little difference in brightness between red and yellow, an edge is not formed at the border of the colors.
  • the advantage of the present embodiment is that an edge is added if a hue is changed.
  • FIG. 20 is a block diagram illustrating the construction of the pattern composite circuit according to the second embodiment, which corresponds to the pattern composite circuit 111 as shown in FIG. 17.
  • the brightness signal 133 which is generated in the brightness signal generator 108 and the pattern signal 134 which is generated in the pattern generation circuit 110 are selected by the selector 1601 according to the HIT signal 135. That is, the selector 1601 selects the pattern signal 134 in the case where the color is discriminated, while in the case where the color is not discriminated, the brightness information 133 is selected and output.
  • the shifted distance 132 which is generated from the color discrimination circuit 109 is compared with the threshold which is set by the CPU 1605 in the comparator 1604.
  • in the case where the shifted distance 132 is larger than the predetermined threshold (A input > B input), "1" is output from the comparator 1604.
  • the result of comparison is inverted by the inverter 1603 and the output of the selector 1601 is controlled in the NAND gate circuit 1602.
  • the data "1" is output from the NAND circuit 1602 and the brightness information becomes "1", that is, the image is blackened.
  • the pattern signal 134 or the brightness information 133 output from the selector 1601 is selected and output to the LOG convertor 112.
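
Functionally, the selector 1601, comparator 1604, inverter 1603, and NAND gate 1602 of this embodiment reduce to a per-pixel decision; the sketch below assumes 8-bit signals and an arbitrary CPU-set threshold. Restricting the edge blackening to discriminated (HIT) pixels is an assumption made to keep the sketch self-consistent, since the text does not spell out how non-discriminated pixels (whose shifted distance is pulled up to 255) are treated.

```python
def composite_pixel(hit, pattern_value, brightness, shifted_distance, threshold=128):
    """Sketch of the second-embodiment composite logic (FIG. 20): the selector
    1601 picks the pattern when a color was discriminated (HIT), otherwise the
    brightness signal; the comparator 1604 compares the shifted distance 132
    with a CPU-set threshold and, for a discriminated pixel near the window
    edge, the output is forced to black so that an edge is drawn along the
    color border."""
    if hit and shifted_distance > threshold:        # comparator 1604 (A input > B input)
        return 0                                    # blacken: edge at the color border
    return pattern_value if hit else brightness     # selector 1601

print(composite_pixel(hit=1, pattern_value=200, brightness=90, shifted_distance=10))   # 200
print(composite_pixel(hit=1, pattern_value=200, brightness=90, shifted_distance=230))  # 0 (edge)
print(composite_pixel(hit=0, pattern_value=0,   brightness=90, shifted_distance=255))  # 90
```
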
  • the present invention can be applied to a system comprising a single device or a plurality of devices.
  • the present invention can also be applied to the case where a program executing the processing directed by the present invention is supplied to a system or an apparatus.
  • a clear output is obtained because the pattern density is reduced at the border between a plurality of colors and the effect of the color discrimination error is eliminated, in a manner such that the color of the image data is discriminated and the distance from the center of the discrimination range of the input image data is reflected in the density of the pattern data expressing the discriminated color.
  • the shifted distance can be discriminated in multiple stages and the density can be changed in multiple stages according to the shifted distance.
  • the density after the LOG conversion can be changed instead of the brightness signal as in the aforementioned embodiment.
  • processing can be arranged according to the shifted chroma rather than the shifted hue.

Abstract

An image processing method and apparatus which inputs a color image signal, and generates and outputs pattern data corresponding to the color of the image signal. The color of each pixel of the input color image signal is discriminated by a color discrimination circuit and the pattern data which corresponds to the color is output from the pattern generation circuit according to the result of discrimination. The brightness of the pattern data is adjusted according to a shift, which is the difference between the discriminated color and the color of the pixel. In the instance where the discriminated color is different from the predetermined color, or the proper color is not discriminated, the brightness signal of the pixel data is output as is. In this way, the coexistence of the pattern data at the border of the colors is de-emphasized.

Description

This application is a continuation, of application Ser. No. 07/888,586 filed May 26, 1992, now abandoned.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image processing method and apparatus, and more particularly to an image processing method and apparatus which is applicable to a printer, digital copier or facsimile which prints image data from an image scanner, a computer or the like.
2. Description of the Related Arts
In a conventional digital copier, an original image is irradiated by a light source such as a halogen lamp and the reflection of the original image is read by a solid image pickup device such as a CCD (charge coupled device). The image signal which was photoelectrically transformed by the solid image pickup device is further transformed into a digital signal. After a predetermined correction processing is performed on the digital signal, the corrected image signal is output to a printer such as a laser beam printer, thermal printer, or ink jet printer. Thus, the recording image is formed on a recording medium such as recording paper.
Since a color original image may be copied by a monochrome digital copier, the image output of the color image is required to carry more information than the output of a black and white image. To copy such images, digital full-color copiers and one-point color copiers capable of copying a multiple-color image in color have been developed.
On the other hand, for the case where a color image is read and the image data is output by the recording apparatus and printed in black and white, there is a copier which reads the color image with a color sensor, discriminates the colors of the original image, and outputs the pattern data corresponding to each color on the basis of the result of the color discrimination. Thus, art is known in which the duplication is produced in a manner such that the difference of colors in the original image is expressed by the difference of the printed patterns.
However, in the above-described apparatus which discriminates the colors of an original image and duplicates the image by generating pattern data corresponding to the discriminated colors, there is a drawback in that the printed result is not clear because, at a point where two colors are mixed or in a gradation area where colors are continuously changing, the two kinds of patterns coexist at the changing point. This type of problem cannot be ignored when a color is incorrectly discriminated, particularly when a shift of the color reading position has occurred in the scanner.
SUMMARY OF THE INVENTION
Accordingly, it is an object of the present invention to provide an image processing method and apparatus which de-emphasizes the coexistence of patterns in a color changing part of a color original image.
It is another object of the present invention to provide an image processing method and apparatus capable of avoiding the coexistence of patterns in the color changing part and outputting the patternized color of the original image.
It is another object of the present invention to provide an image processing method and apparatus which de-emphasizes the coexistence of patterns in the color changing part.
It is another object of the present invention to provide an image processing method and apparatus capable of automatically adding an edge to an area where colors are changing.
It is another object of the present invention to improve a monochrome image formation apparatus in which lack of color information is reduced.
It is another object of the present invention to perform various image processing in real time.
According to the present invention, the foregoing objects are attained by providing an image processing apparatus which inputs a color image signal, and generates and outputs pattern data corresponding to the color of the color image signal, comprising: color discrimination means for inputting a color image signal and discriminating the colors of the color image signal; pattern generation means for generating a pattern which is predetermined to correspond to the color discrimination signal from the color discrimination means; and brightness conversion means for converting the brightness of the pattern in accordance with the result of the color discrimination.
Furthermore, according to the present invention, the foregoing objects are attained by providing an image processing method which inputs a color image signal, and generates and outputs pattern data corresponding to the color of an image signal, comprising the steps of: inputting the color image signal and discriminating the color of the color image signal; generating the predetermined pattern corresponding to every color; and converting the brightness of the pattern according to the result of the color discrimination of the color image signal.
Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A and 1B are block diagrams illustrating the circuit construction of an image processing apparatus of a digital copier according to a first embodiment of the invention;
FIG. 2 is a diagram illustrating the construction of a CCD sensor;
FIG. 3 is a timing chart of a CCD driving pulse signal;
FIG. 4 is a block diagram illustrating the construction of a CCD driving pulse generation circuit;
FIG. 5 is a timing chart of the CCD driving pulses;
FIG. 6 is a block diagram illustrating the construction of a black correction circuit of a digital copier according to the present embodiment;
FIG. 7 is a diagram illustrating the concept of a black correction in a black correction circuit of the digital copier according to the present embodiment;
FIG. 8 is a block diagram illustrating a white correction circuit of the digital copier according to the present embodiment;
FIG. 9 is a diagram illustrating the concept of the white correction in the white correction circuit of the digital copier according to the present embodiment;
FIG. 10 is a format diagram of data for a white board in the white correction circuit of the digital copier according to the present embodiment;
FIG. 11 is a flowchart illustrating the procedure of white correction in the white correction circuit of the digital copier according to the present embodiment;
FIG. 12 is a diagram of the brightness signal generator of the digital copier according to the present embodiment;
FIG. 13 is a diagram illustrating a color space where color discrimination is performed by hue;
FIG. 14 is a block diagram illustrating the construction of a color discrimination circuit of the digital copier according to the present embodiment;
FIG. 15 is a block diagram illustrating the construction of a pattern generation circuit of the digital copier according to the present embodiment;
FIGS. 16A-16F are the color patterns corresponding to each color which is output from the pattern generation circuit of the digital copier according to the present embodiment;
FIG. 17 is a block diagram illustrating the construction of the pattern composite circuit of the digital copier according to the present embodiment;
FIG. 18 is a diagram for explaining the effect in the color border area in the digital copier according to the present embodiment;
FIG. 19 is a diagram for explaining the edge adding effect in the digital copier according to the present embodiment; and
FIG. 20 is a block diagram illustrating the construction of the pattern composite circuit of the digital copier according to the other embodiment.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Preferred embodiments of the present invention will now be described in detail in accordance with the accompanying drawings.
FIGS. 1A and 1B are block diagrams illustrating the circuit construction of the image processing apparatus of the digital copier according to the present embodiment.
In FIG. 1A, numeral 101 refers to a CCD (charge coupled device), an image sensor (color reading sensor), for converting the color original image, which is formulated on the image reading face of the CCD 101 using color separation filters, into electrical signals of G (green), B (blue), and R (red) through an optical system such as a rod lens. The numeral 102 refers to an amplifying circuit which amplifies the image output signal from the CCD 101, which amplified image signal is output to a coaxial cable 103. The numeral 104 refers to a S/H (sample and hold) circuit which performs a S/H on the color image signal outputted from the amplifying circuit 102 through the coaxial cable 103, and outputs the S/H signal as a color signal of G, B, and R. The numeral 105 refers to an A/D (analog/digital) conversion circuit which converts the analog color image signal on which the S/H is performed in the S/H circuit 104 into a digital color image signal. The numeral 106 refers to a position correction circuit which electrically corrects the reading position of each channel of the CCD 101, and the numeral 107 refers to a black/white correction circuit which performs black level correction and white level correction (shading correction), to be described later, on the digital image signal.
In FIG. 1B, numeral 108 refers to a brightness signal generator which generates brightness signals from the digital color image signal in which the black and white corrections have been processed. The numeral 109 refers to a color discrimination circuit which discriminates the color of each pixel of the digital color image in which the black and white corrections have been processed. According to the present embodiment, the color discrimination circuit outputs the color discrimination signal 131 according to the hue signal and the shift signal 132 which indicates a shift from the center of a discrimination range (shift from the representative value of the hue). The numeral 110 refers to a pattern generation circuit comprising a storage medium of a RAM or ROM which outputs a predetermined pattern corresponding to each color in accordance with the result of color discrimination by the color discrimination circuit 109. In the present embodiment, the pattern generation circuit 110 is set to recognize the color discrimination signal 131 corresponding to the hue signal as a reading address and to output a pattern which is stored in the memory in advance.
The numeral 111 refers to a pattern composite circuit which outputs either the brightness signal 133 generated in the brightness signal generator 108 according to the HIT signal 135 or the pattern signal 134, indicating what color is generated from the pattern generation circuit 110, which is converted according to the shift signal 132. The numeral 112 refers to a LOG convertor which converts a brightness signal from the pattern composite circuit 111 into the density signal and outputs the density signal to a connected printer. Furthermore, section A in FIG. 1A, which is enclosed by a chain line, corresponds to the video image processing circuit of the image reader (image scanner).
In the digital copier according to the present embodiment, a full-color original image is exposed by a light source such as a halogen lamp or fluorescent lamp (not shown) and the reflected color image from the color original image is picked up by a color image sensor such as a CCD. Then, an analog image signal which is obtained by the image sensor is digitized by an A/D convertor. The printed image is obtained in a manner such that after the digitized full-color image signal has been processed, the signal is output to an image forming apparatus such as a thermal printer, ink jet printer, or laser beam printer (not shown). The detailed process is described below. The ink jet printer here includes a so-called bubble jet printer as shown in U.S. Pat. No. 4,723,129.
First, the color original is irradiated by an exposure lamp (not shown) and reflected light from the original is separated into RGB by the color separation filters. Then, the signal is input into the color reading sensor 101 and is amplified to a predetermined level by the amplifying circuit 102. The CCD 101 is driven by a clock signal which was generated by the system pulse generator (not shown).
FIG. 2 is a diagram illustrating the arrangement of each sensor chip (sensor device) of the color reading sensor 101 and FIG. 3 illustrates the timing of the driving pulse of each sensor device.
In FIG. 2, the sensor 101 comprises five sensor devices which are arranged in zigzags to read data in a manner such that the main scanning direction is divided into five parts. In each sensor device, 63.5 μm is predetermined as one pixel and the pixel data for 1024 pixels can be read at a density of 400 dpi (dot/inch). Furthermore, as shown in FIG. 2, one pixel is divided into three parts, in the order of G, B, and R, in the main scanning direction. Thus, each sensor device comprises 3072 (1024×3) effective pixels in total. On the other hand, each sensor device 58a-62a is formed on a single ceramic substrate. It is arranged so that the first (1st), third (3rd), and fifth (5th) devices of the sensor 101 (58a, 60a, 62a) are formed on the line LA and the second (2nd) and fourth (4th) devices (59a, 61a) are on the line LB, which are located four lines apart from the line LA (63.5 μm×4=254 μm). In the case where an original image is read, the image is scanned to the AL direction.
Of the five sensor devices in FIG. 2, the 1st, 3rd, and 5th devices (58a, 60a, 62a) and the 2nd and 4th devices (59a, 61a) are independently synchronized and respectively driven by the driving pulse groups ODRV (118a) and EDRV (119a).
The driving pulses of the sensor devices are now described.
As shown in FIG. 3, pulse signals O01A, O02A, and ORS included in the driving pulse group ODRV (118a), and pulse signals E01A, E02A, and ERS included in the driving pulse group EDRV (119a) are categorized as charge transfer clocks (O01A, O02A, E01A and E02A) and charge reset pulses (ORS and ERS). These pulse groups are totally synchronized and generated so that jitter will not be generated, in order to suppress noise and interference between the 1st, 3rd, and 5th devices and the 2nd and 4th devices. For this purpose, these pulses are generated in synchronization with a clock signal from the reference oscillating source OSC (not shown).
FIG. 4 is a block diagram illustrating the construction of the circuit which generates the aforementioned driving pulse groups ODRV (118a) and EDRV (119a). FIG. 5 is a diagram illustrating the timing of the driving pulses. This circuit block is included in the system control pulse generator (not shown).
The clock KO (135a), which divides the source clock CLKO generated from a single reference oscillating source OSC (558a), is a clock signal generating the reference signals SYNC 2 and SYNC 3 which determine the timing of generating the sensor driving pulses ODRV and EDRV. The output timing of the reference signals SYNC 2 and SYNC 3 is determined according to a set value of the presettable counters 64a and 65a which are set by the signal line 22 connected to the CPU bus. Furthermore, the reference signals SYNC 2 and SYNC 3 respectively initialize a frequency divider 66a and driving pulse generator 68a, and a frequency divider 67a and driving pulse generator 69a.
That is, since the reference signals SYNC 2 and SYNC 3 refer to the horizontal synchronizing signal HSYNC (118), which is input into the present block as a standard, and are formed from the source clock CLKO which is output from the single reference oscillating source OSC (558a) and the frequency-divided clock generated in synchronization with it, each pulse group of ODRV (118a) and EDRV (119a) is obtained as synchronized signals without jitter. The signal turbulence caused by interference among the sensor devices (58a, 59a, 60a, 61a, and 62a) can be prevented.
The sensor driving pulse ODRV (118a) which was obtained by synchronization is supplied to the 1st, 3rd, and 5th sensor devices (58a, 60a, and 62a) and the sensor driving pulse EDRV (119a) is supplied to the 2nd and 4th sensors (59a and 61a). Each of the sensor devices independently outputs the video signals V1-V5 in synchronization with the driving pulses. As shown in FIG. 1A, each of the video signals V1-V5 is amplified to a predetermined voltage by the amplifying circuits 501-1 through 501-5, which are independent among the channels. The signals V1, V3, and V5 are transmitted at the timing of the OOS (129) and the video signals V2 and V4 are transmitted at the timing of the EOS (134) through the coaxial cable 103. The voltage signals V1-V5 are input into the S/H circuit 104 of the video image processing circuit and held therein.
As described above, the color image signals, which were obtained in a manner such that the five sensor devices read the original image by dividing it into five parts, are separated into G (green), B (blue), and R (red), and the S/H is performed on the signals in the S/H circuit 104. Therefore, signal processing for 3×5=15 channels is performed on the signals on which the S/H has already been performed.
The analog color image signals which are sampled and held in every R, G, and B in the S/H circuit 104 are digitized in each channel of CH 1-CH 5 in the A/D conversion circuit 105. The digitized signals are output to the position correction circuit 106 independently, but in parallel.
As described above, since the CCD sensor 101 has the space for four lines (63.5 μm×4=254 μm) in the sub-scanning direction and reads the original image by zigzag type sensor devices which divide the area into five areas in the main scanning direction, the image reading position of channels CH 2 and CH 4 is shifted four lines from that of channels CH 1, CH 3, and CH 5 in the sub-scanning direction. In order to connect the image signal from each sensor device in the main scanning direction, the position correction is performed in the sub-scanning direction by the position correction circuit 106 having the memory for a plurality of lines.
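Conceptually the position correction circuit 106 is a line-delay buffer: one group of channels (here CH 2 and CH 4, though which group is delayed depends on the scan direction) is held back by four lines so that all five channels describe the same original line before they are joined in the main scanning direction. A sketch in pure Python, with a deque standing in for the line memory; the four-line figure comes from the 63.5 μm × 4 spacing stated above.

```python
from collections import deque

class LineDelay:
    """Delay a channel's output by a fixed number of scan lines, as the
    position correction circuit 106 does for the offset sensor devices."""
    def __init__(self, lines=4):
        self.buf = deque([None] * lines, maxlen=lines)

    def push(self, line):
        """Feed one scan line; return the line read `lines` scans earlier
        (None while the buffer is still filling)."""
        delayed = self.buf[0]
        self.buf.append(line)
        return delayed

ch2_delay = LineDelay(lines=4)
for n in range(6):                      # six consecutive scan lines of CH 2
    print(n, ch2_delay.push(f"CH2 line {n}"))
# Scans 0-3 print None; from scan 4 on, the data read four scans earlier emerges,
# now aligned with the corresponding lines of CH 1, CH 3, and CH 5.
```
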
Then, the operation for black correction in the black/white correction circuit 107 is described with reference to FIGS. 6 and 7.
As shown in FIG. 7, the black level output signals of the channels CH 1-CH 5 vary greatly among the sensor devices and pixels in the case where a small quantity of light is input to the CCD sensor 101. If such an image signal is output to a printer as it is, lines and unevenness may appear in the printed image data (including patterns to be described later).
Therefore, such variation needs to be eliminated by corrections in the black/white correction circuit 107 as shown in FIG. 6. Prior to an image reading operation, the original image scanning unit is moved to the position of a black board having an even density which is arranged at the tip of the original image stand, outside the image reading area. Then a halogen lamp is lit, the black board is read, and the black level image signal is input.
In order to describe the black correction operation in the black correction circuit 77aB for a blue signal (Bin), the operation that the image data of the black level for one line is stored in the black level RAM 78a is described first. The selector 82a selects the A input by the selection signal 604 and the gate 80a is closed by the gate signal 601, while the gate 81a is opened by the gate signal 602. That is, the data path is connected such as 151a→152a→153a. On the other hand, the address input 155a of RAM 78a is initialized by the HSYNC and the A input of the selector 83a is selected by the selection signal 603 so that the output 154a of the address counter 84a which counts VCLK is input to the selector 83a and output as the address of the RAM 78a. In this way, the black level signal for one line is stored in the RAM 78a (referred to as a black reference value reading mode).
On the other hand, in the case of reading an image, the RAM 78a operates in a data reading mode and the black level data, which was read out pixel by pixel in each line of the RAM 78a, is input to the B input of the subtracter 79a via the data line 153a→157a. Then, the result of the operation (A input)-(B input) is output from the subtracter 79a. At this time, the gate 81a is closed by the gate signal 602, while the gate 80a is opened by the gate signal 601. Side A is selected by the selection signal 605 and the output is output to the side A. Therefore, using the black level data DK(i) which is stored in the RAM 78a, in the case where the blue signal is input, the output of the black correction circuit 156a is Bin(i)-DK(i)=Bout(i) (referred to as black correction mode). Furthermore, "i" refers to a variable representing the image signal which is read by the i-th optical conversion device of the CCD sensor 101.
Similarly, in the case where the green Gin or red Rin signal is input, the same control is performed in the block 77aG or 77aR. In this way, the image signals Bout, Gout, and Rout, on which the black correction has been performed for each color component, are output.
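In software terms the black correction amounts to subtracting a stored one-line black reference from each incoming line; the following is a minimal sketch with NumPy arrays standing in for the black level RAM 78a and the subtracter 79a. The clamp at zero is an assumption about how negative results are handled, which the text does not state.

```python
import numpy as np

def black_correct(line, black_ref):
    """Black correction mode: subtract the stored black level DK(i) of the
    i-th pixel (RAM 78a) from the input B_in(i) (subtracter 79a), clamping
    at zero since the output is assumed to be unsigned 8-bit."""
    out = line.astype(np.int16) - black_ref.astype(np.int16)
    return np.clip(out, 0, 255).astype(np.uint8)

black_ref = np.array([3, 5, 2, 4], dtype=np.uint8)      # one line read from the black board
line_in   = np.array([10, 5, 200, 4], dtype=np.uint8)   # B_in for one line
print(black_correct(line_in, black_ref))                # [  7   0 198   0]
```
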
Furthermore, the control signal of each selector gate and the gate signals 601-605 for the above-described controls are formed under control of a CPU (not shown) as the output of the latch 85a is allotted as an I/O of the CPU. The CPU is enabled to access the RAM 78a when the side B is selected by the selectors 82a, 83a, and 86a.
Then, referring to FIG. 8, the white level correction (shading correction) in the black/white correction circuit 107 is described. This correction is to correct variation in the light system, optical system, and sensitivity of the sensor on the basis of white color data obtained when the original image scanning unit is moved to a position to irradiate the white board, which when read has an even density. The basic construction of the white correction circuit is shown in FIG. 8. The basic construction of this circuit is similar to that of the black correction circuit in FIG. 6. In the black correction circuit, the correction is performed in a manner such that a black image signal is stored in the RAM 78a and the value is output to the subtracter 79a. The correction is thus performed by subtraction. The difference in the white correction circuit is that the white data is stored in the RAM 78b, and the value is output to the multiplier 79b and multiplied by the image signal. Since the operations in the other parts are similar to those of the black correction, an explanation is omitted.
The operation of the white color correction is now described. With the CCD 101 for reading the original image located at the reading position for the white board (the home position), the exposure lamp (not shown) is lit prior to a duplication or reading operation, and the white board is read. The white level image data of even density read in this way is stored in the correction RAM 78b, which has a memory capacity capable of storing the pixel data for one line. For example, if the main scanning width corresponds to the length of the longitudinal side of A4 size paper, the line contains 4752 pixels (297 mm × 16 pel/mm), so a memory capacity of at least 4752 bytes is necessary for the RAM 78b.
FIG. 9 illustrates the image signal obtained when the white board is read. If the white color data of the i-th pixel is Wi (i = 0 to 4751), as shown in FIG. 9, the white color data from the white board is stored for every pixel as shown in FIG. 10.
Then, using the white color data Wi of the i-th pixel, the corrected data D0 for the reading value Di of the i-th pixel of an ordinary image is obtained by D0 = Di × FFH / Wi, where "FFH" denotes "255" in hexadecimal notation.
Then, the CPU outputs to the latch circuit 85b the gate signals which open the gates 80b and 81b, together with the selection signals 801-805, which select the B input in the selectors 82b, 83b, and 86b. Thus, the RAM 78b can be accessed by the CPU.
FIG. 11 is a flowchart illustrating the procedure for forming the white correction data and storing the data in the RAM 78b.
First, in step S1, the pointer i is set to "0" and the white color data Wi stored in the RAM 78b is read. In step S2, the operation FFH/Wi is performed on the white color data Wi. The operations W0 = FFH/W0, W1 = FFH/W1, . . . , Wi = FFH/Wi, . . . are performed consecutively, so that the white color data stored in the RAM 78b is converted as shown in FIG. 10. When this processing of the blue component for the 4752 pixels in the main scanning direction of the color component image (Step B in FIG. 11) is completed, the processing for the green component (Step G) and the red component (Step R) is performed consecutively in the same way. Thus, the white correction data is stored in the RAM 78b of each of the circuits 77bB, 77bG, and 77bR in FIG. 8.
Then, during ordinary reading, the gate 80b is opened by the gate signal 801 and the gate 81b is closed by the gate signal 802, so that the operation D0 = Di × FFH/Wi is performed on the input original image data Di. Side A is selected by the selection signals 803 and 805 in the selectors 83b and 86b. In this way, the data FFH/Wi read out of the RAM 78b is input to the multiplier 79b through the signal path 153b→157b, where it is multiplied by the original image data 151b input at the A input terminal, and the product (Di × FFH/Wi) is obtained and output.
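For illustration, the shading correction can be summarized in software as follows. This sketch is not the disclosed circuit: the function names are hypothetical, the guard against a zero white reading is an assumption (the disclosure does not address it), and rounding back to 8 bits stands in for the fixed-point behaviour of the multiplier 79b.

    # Minimal software sketch of the white (shading) correction (illustrative only).
    FFH = 0xFF  # 255 in hexadecimal notation

    def make_white_coefficients(w):
        # Step S2 of FIG. 11: replace each stored Wi by FFH / Wi
        # (the zero guard is an assumption of this sketch)
        return [FFH / wi if wi else 0.0 for wi in w]

    def white_correct(line, coeff):
        # D0 = Di x FFH / Wi for each pixel of an ordinary image
        return [min(FFH, round(d * c)) for d, c in zip(line, coeff)]

    # Example: a pixel whose white-board reading was 200 is scaled by 255/200
    # white_correct([100], make_white_coefficients([200])) -> [128]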
As described above, variations in the image input system, e.g. the black level sensitivity of the image input device, the dark current of the CCD 101, the quantity of light in the optical path by which light reaches the sensor, and the white level sensitivity, are corrected for both the black and white levels. The image data Rout 121, Gout 122, and Bout 123 (FIG. 1A), corrected for each color over the main scanning direction, are obtained from the black correction outputs Bout, Gout, and Rout and the white correction outputs Bout 100, Gout 101, and Rout 102.
The image data Rout 121, Gout 122, and Bout 123, on which the black and white corrections have been performed, are input to the brightness signal generator 108 and the color discrimination circuit 109.
The brightness signal generator 108 standardizes the filtered image read by the CCD sensor 101 and forms an ND (neutral density) image. This operation is explained with reference to FIG. 12. The input image data Rout 121, Gout 122, and Bout 123 are added by the adder 201, the sum is divided by three by the divider 202, and the resulting average is output as the brightness signal 133.
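As a simple illustration (not part of the disclosure), the adder 201 and divider 202 amount to the following per-pixel operation:

    # Minimal sketch of the brightness signal generator of FIG. 12.
    def brightness(r, g, b):
        # ND value = (R + G + B) / 3, kept as an 8-bit integer
        return (r + g + b) // 3

    # brightness(120, 130, 140) -> 130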
The color discrimination circuit 109 is now described. The present embodiment utilizes a hue signal as the color discrimination signal, which enables accurate color discrimination even when the colors to be compared are the same but differ from one another in intensity or saturation. The outline of the color detection method is described first.
Each of the input R, G, B data values (Rout, Gout, and Bout) is 8 bits, so the input carries information on 2^24 colors in total. If such a large amount of information were used as is, the apparatus would be expensive because of the size of the circuit required. In order to reduce the cost, the circuit size can be reduced in the following way.
As described above, the present embodiment utilizes a hue signal. Strictly speaking, the hue used in the present embodiment differs from hue in the ordinary sense, but it is referred to here as the "hue signal" for convenience. It is known that a color space can be expressed in three dimensions comprising saturation, hue, and intensity, as in the Munsell color system. First, the 3-dimensional R, G, B data must be converted into 2-dimensional data. In general, the component common to the R, G, B data, that is, the minimum value min(R, G, B), represents the achromatic color component, so min(R, G, B) is subtracted from each of the R, G, B values and the 3-dimensional input color space is converted into a 2-dimensional color space by using the remaining information as the chromatic color components. The plane onto which the 3-dimensional space is converted forms a ring of 0° to 360°, which is divided into six regions as shown in FIG. 13. The hue is obtained from the ordering of the R, G, B data, i.e. R>G>B, R>B>G, G>B>R, G>R>B, B>G>R, or B>R>G, together with the maximum and middle values of the input R, G, B data and an LUT (look-up table).
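The disclosure realizes this hue derivation with a look-up table; purely for illustration, an equivalent arithmetic form of the six-sector 0°-360° ring is sketched below. The direct formula (rather than the ROM contents, which are not reproduced here) and the handling of the achromatic case are assumptions of this sketch.

    # Illustrative sketch of the hue derivation on the six-sector ring of FIG. 13.
    def hue_degrees(r, g, b):
        mx, mn = max(r, g, b), min(r, g, b)
        c = mx - mn                  # chromatic part left after removing min(R, G, B)
        if c == 0:
            return None              # achromatic colour: no hue (assumption)
        if mx == r:
            h = ((g - b) / c) % 6    # orderings R>G>B and R>B>G
        elif mx == g:
            h = (b - r) / c + 2      # orderings G>B>R and G>R>B
        else:
            h = (r - g) / c + 4      # orderings B>G>R and B>R>G
        return 60.0 * h

    # hue_degrees(255, 128, 0) -> approximately 30 degrees (between red and yellow)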
The actual operation of the color discrimination circuit 109 is now explained with reference to FIG. 14.
First, the detection circuit 1401 compares the magnitudes of the Rout, Gout, and Bout data on which the black/white corrections have been performed. In the circuit 1401 the input data are compared by comparators, and the maximum (max), middle (mid), and minimum (min) values are output according to the result of the comparison; the ordering determined by the comparators is output as the rank signal 1414. The min value is subtracted from the max and mid values by the subtracters 1402 and 1403, so that the achromatic color component is removed from the max and mid values. The subtracted values are input into the hue detection circuit 1404 together with the rank signal 1414.
The hue detection circuit 1404 comprises a randomly accessible storage device such as a RAM or ROM; in the present embodiment it comprises a look-up table using a ROM. Values corresponding to the angles on the plane shown in FIG. 13 are stored in the ROM in advance, and the corresponding hue value is output for the input rank signal 1414, (max-min) value, and (mid-min) value. The hue value output in this way is then input into the window comparators 1405 and 1406.
The values set in the window comparators 1405 and 1406 are hue data values which the CPU derives, with a predetermined offset, from the hue value of the color to be patternized, entered through data input means (not shown). If the value set in the comparator 1405 is a1, then "1" is output from the comparator 1405 when the hue data input from the hue detection circuit 1404 satisfies (hue data) > a1. If the value set in the comparator 1406 is a2 (a1 < a2), then "1" is output from the comparator 1406 when (hue data) < a2. Consequently, "1" is output from the AND gate 1410 when a1 < (hue data) < a2.
Furthermore, the CPU 1407 sets the middle value (a1 + a2)/2 of the two values a1 and a2, which are set in the window comparators 1405 and 1406, in the subtracter 1408. The subtracter 1408 obtains the difference (as an absolute value) between the output value 131 of the hue detection circuit 1404 and this middle value of the color discrimination range, and the difference is output as the shifted distance 132. The difference value is input into the buffer 1409, whose output is controlled by the output of the AND circuit 1410. That is, when the output of the AND circuit 1410 is "1", the stored difference value is output from the buffer 1409 as the shifted distance 132; when the output of the AND circuit 1410 is "0", i.e. when a1 < (hue data) < a2 is not satisfied, the output of the buffer 1409 becomes high impedance.
In FIG. 14, the section 14(a) enclosed by a dotted line has the same construction as the section 14(b). In the present embodiment the circuit is set up so that two colors can be discriminated; if colors to be discriminated are added, further sections like the one enclosed by the dotted line 14(b) need to be added. In the following description the sections enclosed by dotted lines are referred to as 14(a) and 14(b), respectively. Furthermore, as will be described later with reference to FIG. 15, the color discrimination signal 131 is 5 bits wide in order to discriminate five colors.
The outputs of the buffer 1409 and the buffer 1411 are connected in common. Accordingly, the CPU 1407 must control the values set in the window comparators so that the outputs of the buffer 1409 and the buffer 1411 are never active at the same time; in other words, the window ranges of the sections enclosed by the dotted lines must not overlap. The common output of the buffers 1409 and 1411 is pulled up by the resistor array 1412 so that the shifted distance 132 is at a high level when the outputs of the AND circuits 1410 of the color detection sections 14(a) and 14(b) are both "0". That is, when the input R, G, and B image signals cannot be discriminated as any color, data in which all the bits are "1" is output as the shifted distance 132.
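For illustration only, the window comparison and the generation of the shifted distance 132 can be modeled as follows. The list of (color code, a1, a2) windows and the function name are hypothetical; the value 0xFF returned when no window matches models the pull-up by the resistor array 1412.

    # Illustrative sketch of the colour discrimination stage of FIG. 14.
    # 'windows' is a hypothetical list of non-overlapping (color_code, a1, a2)
    # window settings written by the CPU into the window comparators.
    def discriminate(hue, windows):
        for color_code, a1, a2 in windows:
            if a1 < hue < a2:
                # subtracter 1408: |hue - (a1 + a2) / 2| is the shifted distance
                return color_code, int(abs(hue - (a1 + a2) / 2.0))
        # no section hits: the common output is pulled up, so the shifted
        # distance reads as all ones and no colour code is produced
        return None, 0xFF

    # discriminate(25, [(1, 0, 40), (2, 50, 70)]) -> (1, 5)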
Next, the construction of the pattern generation circuit 110 is described with reference to FIG. 15.
In the pattern generation ROM 803, the dot pattern data corresponding to each color is written in advance as shown in FIGS. 16A-16F, each pattern being defined as one 16×16 dot block. The pattern ROM 803 selects the pattern data according to the color discrimination signal 131 input from the color discrimination circuit 109. The pattern data is generated by repeatedly reading the data in the main scanning direction and the sub-scanning direction: the main scanning counter 802 is synchronized with the horizontal synchronizing signal HSYNC and counts the video clock CLK, while the sub-scanning counter 801 is synchronized with the ITOP signal and counts the horizontal synchronizing signal HSYNC.
The outputs of the counters 801 and 802 are each 4 bits, and together with the 5-bit color discrimination signal 131 they form a 13-bit address for the pattern ROM 803. In other words, 32 (2^13 ÷ (2^4 × 2^4) = 2^5) kinds of 16 dot × 16 dot pattern data can be generated corresponding to the kind of color read.
The output data of the pattern ROM 803 is 8 bits, and the most significant bit (MSB) is utilized as the control signal (HIT signal 135) in the pattern composite circuit 111, which will be described later. The data is therefore written into the pattern ROM 803 such that the MSB is normally "0" and becomes "1" where the pattern is to be output.
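For illustration, the address formation and the use of the MSB as the HIT flag can be sketched as follows. The dictionary standing in for the ROM contents is hypothetical, and the assumption that the least significant bit carries the 1-bit pattern dot is made only for this sketch (the disclosure specifies only the MSB).

    # Illustrative sketch of the pattern ROM addressing of FIG. 15.
    def read_pattern(rom, color_code, line_count, pixel_count):
        sub = line_count & 0x0F                  # 4-bit sub-scanning counter 801
        main = pixel_count & 0x0F                # 4-bit main scanning counter 802
        address = ((color_code & 0x1F) << 8) | (sub << 4) | main   # 13-bit address
        data = rom.get(address, 0x00)            # 8-bit data; default MSB = 0
        hit = (data >> 7) & 1                    # MSB doubles as the HIT signal 135
        dot = data & 0x01                        # assumed: LSB carries the pattern dot
        return hit, dot

    # Address layout: [color(5) | sub(4) | main(4)], hence
    # 2**13 / (2**4 * 2**4) = 2**5 = 32 selectable 16 x 16 dot patterns.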
Of course, the pattern ROM 803 can be replaced by a RAM. If a RAM is used, its capacity and the bit allotment of its address are similar to those of the ROM.
Then, the construction of the pattern composite circuit 111 is described.
The shifted distance 132 output from the color discrimination circuit 109 is inverted by the inverter circuit 1004 and input into the pattern controller 1001, where the pattern signal 134 output from the pattern generation circuit 110 is converted into a density signal and output. For example, suppose that the shifted distance 132 is 8-bit data; the pattern controller 1001 then has eight AND circuits 1003, each bit of the inverted shifted distance being input into one of the AND circuits, while the 1-bit pattern signal 134 is input into the other input terminal of each AND circuit. The AND circuits 1003 are opened according to the value of the shifted distance 132, and the 1-bit pattern signal 134 is thereby converted into an 8-bit brightness signal. Thus, pattern information 1005 having a brightness according to the shifted distance 132 is output from the pattern controller 1001.
Furthermore, either this pattern information or the brightness information 133 input from the brightness signal generator 108 is output from the selector 1002 according to the HIT signal 135 output from the pattern generation circuit 110. That is, when the HIT signal 135 is "0" (the pattern signal 134 is not output), the brightness information 133 is selected and output; when the HIT signal 135 is "1", the selector 1002 selects and outputs the pattern information 1005 output from the pattern controller 1001. The HIT signal 135 thus indicates whether or not a color has been discriminated. If the input is not discriminated as any color (HIT = "0"), the brightness information 133, that is, the brightness information of the input image signal, is output as it is. If a color is discriminated (HIT signal = "1"), the pattern information 1005, whose brightness depends on the shifted distance 132, is output.
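The net effect of the pattern controller 1001 and the selector 1002 can be illustrated as follows; the bitwise model of the eight AND circuits and the replication of the 1-bit pattern dot to eight bits are assumptions of this sketch.

    # Illustrative sketch of the pattern composite stage of FIG. 17.
    def composite(brightness_nd, pattern_bit, hit, shift):
        inverted = (~shift) & 0xFF               # inverter circuit 1004
        replicated = 0xFF if pattern_bit else 0  # pattern signal 134 fed to all ANDs
        pattern_info = inverted & replicated     # eight AND circuits 1003 -> 1005
        # selector 1002: pattern information when a colour was discriminated,
        # otherwise the brightness of the input image itself
        return pattern_info if hit else brightness_nd

    # Wherever the pattern dot is set, the 8-bit output tracks the inverted
    # shifted distance:
    # composite(200, 1, 1, 0x10) -> 0xEF;  composite(200, 1, 1, 0xF0) -> 0x0F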
The actual processing is now described with reference to FIG. 18.
In the drawing, the numeral 1101 denotes an actual original image in which rectangles in red and yellow are drawn. In the conventional method, in which the hue of the original image is discriminated and patternized, a color discrimination error occurs at the border between red and yellow. This case is described with reference to the drawings 1102 and 1103: the drawing 1102 illustrates the case where the colors are patternized and printed, and the drawing 1103 is an enlarged view. To simplify the drawing, the patterns illustrated in FIG. 18 differ from the color patterns shown in FIGS. 16A-16F. As is apparent from the drawing 1103, the patterns for red and yellow coexist at their border, and as a result the output becomes unclear.
In contrast, the numerals 1104 and 1105 indicate the result obtained by the present embodiment. The obscurity of the output caused by the coexistence of the patterns is eliminated by reducing the pattern density at the border of the colors where the color discrimination error occurred.
The graph 1105 indicates the change of the pattern density in the pattern output indicated by the drawing 1104. In the graph, the y-axis represents density and the x-axis represents the position in the pattern, corresponding to the pattern output 1104. As is apparent from this, since the density y is lowered around the border of the two colors, the coexistence of patterns is de-emphasized.
Next, the LOG convertor 112 is described. The image data composited by the pattern composite circuit 111 is input to the LOG convertor 112 as shown in FIG. 1, where a brightness/density conversion is performed and the data is converted into a density signal. In the LOG convertor 112, the brightness/density conversion is performed by a look-up table using a ROM. The signal converted into a density signal in the LOG convertor 112 is output to a monochrome printer (e.g. a laser beam printer), where the image is formed.
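The disclosure specifies only that the brightness/density conversion is performed by a ROM look-up table; the particular logarithmic curve below is an assumption chosen for illustration.

    # Illustrative brightness-to-density LOG table (the exact curve is assumed):
    # D = -255 * log10((Y + 1) / 256), clipped to the 8-bit range.
    import math

    LOG_LUT = [min(255, round(-255.0 * math.log10((y + 1) / 256.0)))
               for y in range(256)]

    # LOG_LUT[255] == 0 (white maps to zero density); LOG_LUT[0] == 255 (black).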
[Second Embodiment (FIG. 19)]
A method in which the image information is forcibly changed to black in the portions where the distance shifted from the center of the discrimination range is larger than a predetermined level is now described.
First, the effect of this method is described with reference to FIG. 19.
The drawing 1501 represents a color original image. Color separation is performed on the original image by the hue discrimination and each color is patternized; the result is shown in the drawing 1502. As can be seen, red is expressed as a lateral stripe pattern and yellow as a vertical stripe pattern.
The drawing 1503 is an enlarged view of the border portion. At the border between the colors, a color discrimination error has occurred and the lateral stripe pattern and the vertical stripe pattern partially overlap.
As described in the first embodiment, when processing to reduce the brightness (i.e. to increase the density) is performed where the shifted distance from the center of the discrimination range is larger than the predetermined value, the border where the color discrimination error occurred becomes black, as shown in the drawing 1504. The drawing 1505 represents the whole image obtained by this method: a black edge surrounds the area of each color, and thus a clear output image is obtained.
The drawing 1506 illustrates the conventional method in which edges are extracted from the brightness information and emphasized. In this case, however, since there is little difference in brightness between red and yellow, no edge is formed at the border between the colors.
Thus, in comparison with the conventional method, which performs edge extraction and edge addition on the basis of brightness information, the advantage of the present embodiment is that an edge is added whenever the hue changes.
The difference between the first embodiment and the second embodiment is now described with reference to FIG. 20.
FIG. 20 is a block diagram illustrating the construction of the pattern composite circuit according to the second embodiment, which corresponds to the pattern composite circuit 111 as shown in FIG. 17.
Either the brightness signal 133 generated in the brightness signal generator 108 or the pattern signal 134 generated in the pattern generation circuit 110 is selected by the selector 1601 according to the HIT signal 135. That is, the selector 1601 selects the pattern signal 134 when a color is discriminated, and selects and outputs the brightness information 133 when no color is discriminated.
Meanwhile, the shifted distance 132 generated by the color discrimination circuit 109 is compared in the comparator 1604 with a threshold set by the CPU 1605. When the shifted distance 132 is larger than the predetermined threshold (A input > B input), "1" is output from the comparator 1604. The result of the comparison is inverted by the inverter 1603, and the output of the selector 1601 is gated in the NAND gate circuit 1602. Thus, when the shifted distance 132 is larger than the threshold set by the CPU 1605, the data "1" is output from the NAND circuit 1602 and the brightness information becomes "1", that is, the image is blackened. When the shifted distance 132 is not larger than the threshold, the pattern signal 134 or the brightness information 133 output from the selector 1601 is passed on and output to the LOG convertor 112.
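For illustration, the net behaviour described above can be modeled as follows. The sketch captures the effect stated in the text rather than the exact polarity of the NAND gate 1602; the constant used for the forced "blackening" value and the function name are assumptions.

    # Illustrative sketch of the second-embodiment composite stage of FIG. 20.
    BLACKEN = 0xFF  # forced all-ones output of the NAND circuit 1602 (assumption:
                    # downstream processing renders this value as black)

    def composite_with_edge(brightness_nd, pattern_value, hit, shift, threshold):
        selected = pattern_value if hit else brightness_nd     # selector 1601
        # comparator 1604: shifted distance (A input) > CPU threshold (B input)
        return BLACKEN if shift > threshold else selected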
Thus, by adopting the pattern composite circuit shown in FIG. 20 in place of the pattern composite circuit 111 of the aforementioned embodiment, the edge-adding effect shown in FIG. 19 can be obtained.
The present invention can be applied to a system comprising a single device or a plurality of devices. The present invention can also be applied to the case where a program executing the processing directed by the present invention is supplied to a system or an apparatus.
As described above, according to the present embodiments, the color of the image data is discriminated and the distance of the input image data from the center of the discrimination range is reflected in the density of the pattern data expressing the discriminated color; because the pattern density is reduced at the border between a plurality of colors, where color discrimination errors occur, a clear output is obtained.
Furthermore, the shifted distance can be discriminated in multiple stages and the density can be changed in multiple stages according to the shifted distance.
Furthermore, the density after the LOG conversion can be changed instead of the brightness signal used in the aforementioned embodiments.
Furthermore, the processing can be based on the shift in chroma rather than the shift in hue.
Furthermore, it is not necessary that the extraction of hue be performed in the R, G, B space; it can be performed on a plane, for example the a*b* plane of the L*a*b* space.
As described above, according to the present invention, patterns are prevented from coexisting at the points where colors change in the original image when the colors of the original image are patternized and output.
As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the appended claims.

Claims (22)

What is claimed is:
1. An image processing apparatus comprising:
input means for inputting a color image;
color discrimination means for discriminating a color of the color image input by said input means;
pattern generation means for generating a pattern image in accordance with the color discriminated by said color discrimination means;
difference detecting means for detecting a color difference between the color discriminated by said color discrimination means and a predetermined color; and
control means for controlling a density of the pattern image generated by said pattern generation means in accordance with the color difference detected by said difference detecting means.
2. The image processing apparatus according to claim 1, wherein the pattern image is defined corresponding to a predetermined color range.
3. The image processing apparatus according to claim 1, wherein said color discrimination means outputs a value of hue of the color image input by said input means and said pattern generation means generates a predetermined pattern image in a case where the value output by said color discrimination means is between a first value and a second value of hue.
4. The image processing apparatus according to claim 1, further comprising output means for outputting a black and white image if said color discrimination means discriminates a black and white portion of the color image and for outputting a pattern image generated by said pattern generation means if said color discrimination means discriminates a color portion of the color image.
5. The image processing apparatus according to claim 4, wherein said output means outputs an image to a printer unit.
6. The image processing apparatus according to claim 1, wherein said control means makes the density of the pattern image lighter in a case where the color difference detected by said difference detecting means is greater than a predetermined value.
7. The image processing apparatus according to claim 1, wherein said pattern image is a repeated pattern of a predetermined pattern.
8. The image processing apparatus according to claim 1, wherein said input means inputs a color image by using an image scanner.
9. The image processing apparatus according to claim 1, wherein said pattern image is a black and white image.
10. The image processing apparatus according to claim 1, wherein said pattern generating means generates the same patterns to all images having values of hue between a first and second value of hue.
11. The image processing apparatus according to claim 1, wherein said pattern generation means generates a first pattern image for an image having a first color and a second pattern image for an image having a second color.
12. An image processing method comprising the steps of:
inputting a color image;
discriminating a color of the color image;
generating a pattern image in accordance with the color discriminated in said discriminating step;
detecting a color difference between the color discriminated by said discrimination step and a predetermined color; and
controlling a density of the pattern image generated by said generating step in accordance with the color difference detected by said detecting step.
13. The image processing method according to claim 12, wherein said pattern image is defined corresponding to a predetermined color range.
14. The image processing method according to claim 12, wherein in the discrimination step, a value of hue of the color image input by said input means is output and in the generation step, a predetermined pattern image is generated in a case where the value output by the discrimination step is between a first value and a second value of hue.
15. The image processing method according to claim 12, further comprising the step of:
outputting a black and white image if black and white portion of the color image is discriminated; and
outputting a pattern image generated by the generating step if a color portion of the color image is discriminated.
16. The image processing method according to claim 12, wherein the black and white image or the pattern image is output to a printer unit.
17. The image processing method according to claim 12, wherein in the controlling step, the density of the pattern image is made lighter in a case where the color difference detected by the detecting step is greater than a predetermined value.
18. The image processing method according to claim 12, wherein the pattern image is a repeated pattern.
19. The image processing method according to claim 12, wherein the color image is input by using an image scanner.
20. The image processing method according to claim 12, wherein the pattern image is a black and white image.
21. The image processing method according to claim 12, wherein in the generating step, the same patterns are generated for all of the images having values of hue between a first and second value of hue.
22. The image processing method according to claim 12, wherein in the generating step, a first pattern image is generated for an image having a first color and a second pattern image is generated for an image having a second color.
US08/345,327 1991-05-29 1994-11-21 Image processing method and apparatus Expired - Fee Related US5592310A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/345,327 US5592310A (en) 1991-05-29 1994-11-21 Image processing method and apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP3-124338 1991-05-29
JP12433891A JP3178541B2 (en) 1991-05-29 1991-05-29 Image processing method and apparatus
US88858692A 1992-05-26 1992-05-26
US08/345,327 US5592310A (en) 1991-05-29 1994-11-21 Image processing method and apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US88858692A Continuation 1991-05-29 1992-05-26

Publications (1)

Publication Number Publication Date
US5592310A true US5592310A (en) 1997-01-07

Family

ID=14882881

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/345,327 Expired - Fee Related US5592310A (en) 1991-05-29 1994-11-21 Image processing method and apparatus

Country Status (2)

Country Link
US (1) US5592310A (en)
JP (1) JP3178541B2 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0835024A2 (en) * 1996-10-02 1998-04-08 Xerox Corporation Printing black and white reproducible color documents
US5838465A (en) * 1994-12-02 1998-11-17 Hitachi, Ltd. Color compensating method of color image and color image generating apparatus
US6118550A (en) * 1997-02-19 2000-09-12 Kyocera Mita Corporation Image processing apparatus
US6118895A (en) * 1995-03-07 2000-09-12 Minolta Co., Ltd. Image forming apparatus for distinguishing between types of color and monochromatic documents
US6240203B1 (en) * 1997-11-10 2001-05-29 Sharp Kabushiki Kaisha Image discriminating apparatus
EP1187456A2 (en) * 2000-09-12 2002-03-13 Xerox Corporation Pattern rendering system and method
US20050207641A1 (en) * 2004-03-16 2005-09-22 Xerox Corporation Color to grayscale conversion method and apparatus
US20080167533A1 (en) * 2005-02-28 2008-07-10 Petra Leyendecker Method and Device for the Assessment of Bowel Function
US20080284889A1 (en) * 2007-05-15 2008-11-20 Sony Corporation Image pickup apparatus and method of correcting captured image data
US20110141500A1 (en) * 2009-12-16 2011-06-16 Ricoh Company, Limited Image processing apparatus, image processing method, and computer program product
US20120062914A1 (en) * 2010-09-10 2012-03-15 Oki Data Corporation Image Processing Apparatus and Image Forming System
JP2013042380A (en) * 2011-08-17 2013-02-28 Seiko Epson Corp Image processing apparatus, image processing program, and image processing method
US8846091B2 (en) 2002-04-05 2014-09-30 Euro-Celtique S.A. Matrix for sustained, invariant and independent release of active compounds
US8969369B2 (en) 2001-05-11 2015-03-03 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US9271940B2 (en) 2009-03-10 2016-03-01 Purdue Pharma L.P. Immediate release pharmaceutical compositions comprising oxycodone and naloxone
US20160295070A1 (en) * 2015-03-30 2016-10-06 Brother Kogyo Kabushiki Kaisha Image Scanning Apparatus
US10071089B2 (en) 2013-07-23 2018-09-11 Euro-Celtique S.A. Combination of oxycodone and naloxone for use in treating pain in patients suffering from pain and a disease resulting in intestinal dysbiosis and/or increasing the risk for intestinal bacterial translocation

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6179485B1 (en) * 1996-11-18 2001-01-30 Xerox Corporation Printing black and white reproducible colored stroke documents
JP4730525B2 (en) * 2005-06-13 2011-07-20 富士ゼロックス株式会社 Image processing apparatus and program thereof

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4308553A (en) * 1980-03-03 1981-12-29 Xerox Corporation Method and apparatus for making monochrome facsimiles of color images on color displays
US4369461A (en) * 1979-11-02 1983-01-18 Canon Kabushiki Kaisha Method and apparatus for forming images
US4688031A (en) * 1984-03-30 1987-08-18 Wang Laboratories, Inc. Monochromatic representation of color images
JPS63177221A (en) * 1987-01-19 1988-07-21 Toshiba Corp Controller for output of hard copy
US4958217A (en) * 1986-02-27 1990-09-18 Canon Kabushiki Kaisha Image processing apparatus and method capable of extracting a particular image area using either hue or brightness
US5121230A (en) * 1987-01-19 1992-06-09 Canon Kabushiki Kaisha Image reading apparatus having adjusting circuits for matching the level of and compensating for fluctuation among a plurality of sensing elements
US5408343A (en) * 1991-07-19 1995-04-18 Canon Kabushiki Kaisha Image processor in which colors in an original color image are identified as predetermined patterns on a monochromatic copy of the original
US5444556A (en) * 1992-03-05 1995-08-22 Canon Kabushiki Kaisha Image forming apparatus for forming a pattern image corresponding to a color of an imput image


Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5838465A (en) * 1994-12-02 1998-11-17 Hitachi, Ltd. Color compensating method of color image and color image generating apparatus
US6118895A (en) * 1995-03-07 2000-09-12 Minolta Co., Ltd. Image forming apparatus for distinguishing between types of color and monochromatic documents
EP0835024A2 (en) * 1996-10-02 1998-04-08 Xerox Corporation Printing black and white reproducible color documents
EP0835024A3 (en) * 1996-10-02 2003-08-13 Xerox Corporation Printing black and white reproducible color documents
US6118550A (en) * 1997-02-19 2000-09-12 Kyocera Mita Corporation Image processing apparatus
US6240203B1 (en) * 1997-11-10 2001-05-29 Sharp Kabushiki Kaisha Image discriminating apparatus
EP1187456A2 (en) * 2000-09-12 2002-03-13 Xerox Corporation Pattern rendering system and method
EP1187456A3 (en) * 2000-09-12 2003-07-09 Xerox Corporation Pattern rendering system and method
US6757078B1 (en) 2000-09-12 2004-06-29 Xerox Corporation Pattern rendering system and method
US9283221B2 (en) 2001-05-11 2016-03-15 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US9283216B2 (en) 2001-05-11 2016-03-15 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US9161937B2 (en) 2001-05-11 2015-10-20 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US9056051B2 (en) 2001-05-11 2015-06-16 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US8969369B2 (en) 2001-05-11 2015-03-03 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US9511066B2 (en) 2001-05-11 2016-12-06 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US9480685B2 (en) 2001-05-11 2016-11-01 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US9358230B1 (en) 2001-05-11 2016-06-07 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US9345701B1 (en) 2001-05-11 2016-05-24 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US9168252B2 (en) 2001-05-11 2015-10-27 Purdue Pharma L.P. Abuse-resistant controlled-release opioid dosage form
US10420762B2 (en) 2002-04-05 2019-09-24 Purdue Pharma L.P. Pharmaceutical preparation containing oxycodone and naloxone
US8846091B2 (en) 2002-04-05 2014-09-30 Euro-Celtique S.A. Matrix for sustained, invariant and independent release of active compounds
US9555000B2 (en) 2002-04-05 2017-01-31 Purdue Pharma L.P. Pharmaceutical preparation containing oxycodone and naloxone
US9655855B2 (en) 2002-04-05 2017-05-23 Purdue Pharma L.P. Matrix for sustained, invariant and independent release of active compounds
US9907793B2 (en) 2002-04-05 2018-03-06 Purdue Pharma L.P. Pharmaceutical preparation containing oxycodone and naloxone
US7382915B2 (en) 2004-03-16 2008-06-03 Xerox Corporation Color to grayscale conversion method and apparatus
US7760934B2 (en) 2004-03-16 2010-07-20 Xerox Corporation Color to grayscale conversion method and apparatus utilizing a high pass filtered chrominance component
US20050207641A1 (en) * 2004-03-16 2005-09-22 Xerox Corporation Color to grayscale conversion method and apparatus
US20080181491A1 (en) * 2004-03-16 2008-07-31 Xerox Corporation Color to grayscale conversion method and apparatus
US10258235B2 (en) 2005-02-28 2019-04-16 Purdue Pharma L.P. Method and device for the assessment of bowel function
US20080167533A1 (en) * 2005-02-28 2008-07-10 Petra Leyendecker Method and Device for the Assessment of Bowel Function
US20080284889A1 (en) * 2007-05-15 2008-11-20 Sony Corporation Image pickup apparatus and method of correcting captured image data
US8576305B2 (en) * 2007-05-15 2013-11-05 Sony Corporation Image pickup apparatus and method of correcting captured image data
US9271940B2 (en) 2009-03-10 2016-03-01 Purdue Pharma L.P. Immediate release pharmaceutical compositions comprising oxycodone and naloxone
US9820983B2 (en) 2009-03-10 2017-11-21 Purdue Pharma L.P. Immediate release pharmaceutical compositions comprising oxycodone and naloxone
US20110141500A1 (en) * 2009-12-16 2011-06-16 Ricoh Company, Limited Image processing apparatus, image processing method, and computer program product
US8717648B2 (en) * 2009-12-16 2014-05-06 Ricoh Company, Limited Image processing apparatus, image processing method, and computer program product
US20120062914A1 (en) * 2010-09-10 2012-03-15 Oki Data Corporation Image Processing Apparatus and Image Forming System
JP2013042380A (en) * 2011-08-17 2013-02-28 Seiko Epson Corp Image processing apparatus, image processing program, and image processing method
US10071089B2 (en) 2013-07-23 2018-09-11 Euro-Celtique S.A. Combination of oxycodone and naloxone for use in treating pain in patients suffering from pain and a disease resulting in intestinal dysbiosis and/or increasing the risk for intestinal bacterial translocation
US20160295070A1 (en) * 2015-03-30 2016-10-06 Brother Kogyo Kabushiki Kaisha Image Scanning Apparatus
US9628667B2 (en) * 2015-03-30 2017-04-18 Brother Kogyo Kabushiki Kaisha Image scanning apparatus

Also Published As

Publication number Publication date
JPH04351169A (en) 1992-12-04
JP3178541B2 (en) 2001-06-18

Similar Documents

Publication Publication Date Title
US5592310A (en) Image processing method and apparatus
EP0557099B1 (en) Image processing apparatus and method
EP0501814B1 (en) Image processing apparatus
US5557430A (en) Image processing apparatus generating patterns for colors based on a set relation between colors and patterns and synthesizing patterns with extracted monochromatic information
EP0400991B1 (en) Color image processing apparatus
US5784180A (en) Image memory apparatus
EP0446008B1 (en) Image processing method and apparatus
US5631983A (en) Image forming system for synthesizing color image data with binary image data which has been colored with a predetermined color during the synthesizing operation
EP0523999B1 (en) Image processing method and apparatus
US7054032B2 (en) Image processing apparatus and method
EP0542513B1 (en) Colour image reading apparatus
US5760929A (en) Image processing apparatus for processing discriminated color regions within specified boundaries
US5602655A (en) Image forming system for single bit image data
EP0662765B1 (en) Image processing apparatus and method
JP3226224B2 (en) Image processing device
JP3042911B2 (en) Image processing apparatus and image processing method
JP3206932B2 (en) Image processing method and apparatus
JP3359045B2 (en) Image processing apparatus and image processing method
JP3352106B2 (en) Image processing apparatus and method
JP3200090B2 (en) Image processing method and apparatus
JPH06178111A (en) Image processor
JP2815962B2 (en) Image processing apparatus and image processing method
JPH0810900B2 (en) Color image processor
JPH06125467A (en) Picture processing method and its device

Legal Events

Date Code Title Description
CC Certificate of correction
FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20090107